39 results for source parameters


Relevance:

20.00%

Publisher:

Abstract:

Aspergillus fumigatus (Af) and Pseudomonas aeruginosa (Pa) are leading fungal and bacterial pathogens, respectively, in many clinical situations, and their interface and co-existence have been studied. In vitro, Pa products that are inhibitory to Af have been defined. In some clinical situations both can be biofilm producers, and biofilm could alter their physiology and affect their interaction. This may be most relevant to the airways in cystic fibrosis (CF), where both are often prominent residents. We studied clinical Pa isolates from several sources for their effects on Af, including testing involving their biofilms. We show that the described inhibition of Af is related to the source and phenotype of the Pa isolate. Pa cells inhibited the growth and formation of Af biofilm from conidia, with CF isolates more inhibitory than non-CF isolates, and non-mucoid CF isolates the most inhibitory. Inhibition did not require live Pa contact, as culture filtrates were also inhibitory, again in the order non-mucoid CF > mucoid CF > non-CF. Preformed Af biofilm was more resistant to Pa, and the inhibition that did occur could be reproduced with filtrates. Inhibition of Af biofilm also appears dependent on bacterial growth conditions: filtrates from Pa grown as biofilm were more inhibitory than those from Pa grown planktonically. The differences shown among Pa from these different sources are consistent with the extensive evolutionary changes in Pa that have been described in association with chronic residence in CF airways, and may reflect adaptive changes to life in a polymicrobial environment.

Aim: To optimise a set of exposure factors, with the lowest effective dose, to delineate spinal curvature with the modified Cobb method on a full-spine computed radiography (CR) examination of a 5-year-old paediatric anthropomorphic phantom. Methods: Images were acquired while varying a set of parameters: position (antero-posterior (AP), postero-anterior (PA) and lateral), kilovoltage peak (kVp) (66-90), source-to-image distance (SID) (150-200 cm), broad focus, and use of a grid (grid in/out), to analyse their impact on effective dose (E) and image quality (IQ). IQ was analysed with two approaches: objective (contrast-to-noise ratio, CNR) and perceptual, using 5 observers. Monte Carlo modelling was used for dose estimation. Cohen's kappa coefficient was used to calculate inter-observer variability. The Cobb angle was measured on lateral projections under different imaging conditions. Results: PA gave the lowest effective dose (0.013 mSv) compared to AP (0.048 mSv) and lateral (0.025 mSv). The exposure parameters that allowed the lowest dose were 200 cm SID, 90 kVp, broad focus and grid out for paediatrics using an Agfa CR system. Thirty-seven images were assessed for IQ and thirty-two were classified as adequate. Cobb angle measurements varied between 16° ± 2.9° and 19.9° ± 0.9°. Conclusion: Cobb angle measurements can be performed at the lowest dose with a low contrast-to-noise ratio. The variation in measurements was ±2.9°, which is within the range of acceptable clinical error and without impact on clinical diagnosis. Further work is recommended on improving the sample size and on a more robust perceptual IQ assessment protocol for observers.
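The inter-observer variability analysis above relies on Cohen's kappa, which discounts the agreement two raters would reach by chance alone. A minimal sketch with hypothetical rating data (the observer labels and counts below are made up for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items (nominal categories)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in categories) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two observers rating 10 images as adequate/inadequate (hypothetical data).
a = ['adequate'] * 7 + ['inadequate'] * 3
b = ['adequate'] * 6 + ['inadequate'] * 4
print(round(cohens_kappa(a, b), 3))  # → 0.783
```

Values around 0.6-0.8 are conventionally read as substantial agreement, which is the sense in which the abstract reports high inter-observer reliability.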

Purpose: To determine whether different combinations of kVp and mAs with additional filtration can reduce the effective dose to a paediatric phantom whilst maintaining diagnostic image quality. Methods: Twenty-seven images of a paediatric AP pelvis phantom were acquired with different kVp, mAs and additional copper filtration. Images were displayed on quality-controlled monitors under dimmed lighting. Ten diagnostic radiographers (5 students and 5 experienced radiographers) had eye tests to assess visual acuity before rating the images. Each image was rated for visual image quality against a reference image using two-alternative forced-choice software with a 5-point Likert scale. Physical measures (SNR and CNR) were also taken to assess image quality. Results: Of the 27 images rated, 13 were of acceptable image quality and had a dose lower than the image acquired with standard parameters. Two were produced without filtration, 6 with 0.1 mm and 5 with 0.2 mm copper filtration. Statistical analysis found that inter-rater and intra-rater reliability were high. Discussion: It is possible to obtain an image of acceptable quality with a dose lower than published guidelines. Some areas of the study could be improved, including using a wider range of kVp and mAs to give an exact set of parameters to use. Conclusion: Additional filtration has been identified as a major tool for reducing effective dose whilst maintaining acceptable image quality in a 5-year-old phantom.
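The physical measures mentioned above (SNR and CNR) are typically computed from region-of-interest statistics; one common definition is sketched below. The pixel samples are invented for illustration, and other definitions (e.g. using pooled noise from both ROIs) are also in use:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean over std."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: ROI contrast over background noise."""
    s = np.asarray(roi_signal, dtype=float)
    b = np.asarray(roi_background, dtype=float)
    return abs(s.mean() - b.mean()) / b.std(ddof=1)

# Hypothetical pixel samples from signal and background ROIs of a phantom image.
signal = np.array([210., 205., 198., 202., 207.])
background = np.array([120., 118., 125., 122., 119.])
print(snr(signal), cnr(signal, background))
```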

The main goals of the present work are to evaluate the influence of several variables and test parameters on the melt flow index (MFI) of thermoplastics, and to determine the uncertainty associated with the measurements. The design of experiments (DOE) approach was used to evaluate the influence of test parameters on the measurement of MFI. The uncertainty was calculated using the "bottom-up" approach given in the "Guide to the Expression of Uncertainty in Measurement" (GUM). Since no analytical expression relating the output response (MFI) to the input parameters exists, it was necessary to build mathematical models by fitting the experimental observations of the response variable against each input parameter. Subsequently, the uncertainty associated with the measurement of MFI was determined by applying the law of propagation of uncertainty to the uncertainties of the input parameters. Finally, the activation energy (Ea) of the melt flow at around 200 °C and its uncertainty were also determined.
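The GUM law of propagation referred to above combines input uncertainties as u_c(y)² = Σ (∂f/∂x_i)² u(x_i)². A minimal numerical sketch follows, with sensitivity coefficients from central differences; the MFI model (mass extruded scaled to the 600 s reference time) and all numeric values are simplified illustrations, not the full ISO 1133 procedure:

```python
import math

def propagate_uncertainty(f, x, u, h=1e-6):
    """Combined standard uncertainty of y = f(x) via the GUM law of
    propagation, with sensitivity coefficients from central differences."""
    y_var = 0.0
    for i, (xi, ui) in enumerate(zip(x, u)):
        xp, xm = list(x), list(x)
        step = h * max(abs(xi), 1.0)
        xp[i] += step
        xm[i] -= step
        ci = (f(xp) - f(xm)) / (2 * step)  # sensitivity coefficient dy/dx_i
        y_var += (ci * ui) ** 2
    return math.sqrt(y_var)

# Simplified MFI model: mass m (g) extruded in time t (s), scaled to g/10 min.
mfi = lambda p: 600.0 * p[0] / p[1]
m, t = 0.35, 120.0          # hypothetical measured values
u_m, u_t = 0.005, 0.5       # hypothetical standard uncertainties
print(mfi([m, t]), propagate_uncertainty(mfi, [m, t], [u_m, u_t]))
```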

A strain of Pleurotus ostreatus was grown on tomato pomace as the sole carbon source for the production of laccase. The culture of P. ostreatus showed a peak of laccase activity (147 U/L of fermentation broth) on the 4th day of culture, with a specific activity of 2.8 U/mg protein. The differential chromatographic behaviour of laccase was investigated on affinity matrices containing urea, acetamide, ethanolamine or IDA as affinity ligands. Laccase was retained on such affinity matrices and was purified on a Sepharose 6B-BDGE-urea column with final enzyme recoveries of about 60%, specific activities of 6.0 and 18.0 U/mg protein, and purification factors in the range of 14-46. It was also possible to demonstrate that metal-free laccase did not adsorb to the Sepharose 6B-BDGE-urea column, which suggests that adsorption of native laccase on this affinity matrix was due to specific interaction of carbonyl groups available on the matrix with the active-site Cu(II) ions of laccase. The kinetic parameters (Vmax, Km, kcat, and kcat/Km) of the purified enzyme for several substrates were determined, as well as laccase stability and the optimum pH and temperature for enzyme activity. This is the first report describing the production of laccase from P. ostreatus grown on tomato pomace and the purification of this enzyme on an affinity matrix containing urea as the affinity ligand.
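The kinetic parameters listed above come from fitting the Michaelis-Menten rate law v = Vmax·S/(Km + S). A minimal sketch of one classical estimation route, the Lineweaver-Burk linearisation, using invented substrate concentrations and noise-free rates (real data would carry noise, for which a direct nonlinear fit is preferred):

```python
import numpy as np

# Michaelis-Menten rate law: v = Vmax * S / (Km + S).
# Estimate Vmax and Km from (S, v) pairs via the Lineweaver-Burk
# linearisation 1/v = (Km/Vmax) * (1/S) + 1/Vmax.
S = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])   # substrate conc. (mM)
Vmax_true, Km_true = 12.0, 0.4                  # hypothetical true values
v = Vmax_true * S / (Km_true + S)               # noise-free rates (U/mg)

slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_est = 1.0 / intercept
Km_est = slope * Vmax_est
print(round(Vmax_est, 3), round(Km_est, 3))  # → 12.0 0.4
```

kcat then follows as Vmax divided by the enzyme concentration, and kcat/Km gives the catalytic efficiency reported for each substrate.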

Brain dopamine transporter imaging by Single Photon Emission Computed Tomography (SPECT) with 123I-FP-CIT has become an important tool in the diagnosis and evaluation of parkinsonian syndromes, since this radiopharmaceutical exhibits high affinity for the membrane transporters responsible for cellular reuptake of dopamine in the striatum. Ordered Subset Expectation Maximization (OSEM) is the reconstruction method recommended in the literature; however, Filtered Back Projection (FBP) is still used because of its fast processing, despite some disadvantages. The aim of this work is to investigate the influence of FBP reconstruction parameters on the semiquantification of brain studies with 123I-FP-CIT, compared with results obtained with the recommended OSEM reconstruction.
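Semiquantification of 123I-FP-CIT studies is commonly done with a specific binding ratio computed from ROI counts on the reconstructed image, which is why the reconstruction parameters matter. A minimal sketch of one common formulation; the ROI counts below are invented:

```python
import numpy as np

def specific_binding_ratio(striatal_counts, reference_counts):
    """Specific binding ratio commonly used in 123I-FP-CIT semiquantification:
    (mean striatal counts - mean reference counts) / mean reference counts."""
    s = np.mean(striatal_counts)
    r = np.mean(reference_counts)
    return (s - r) / r

# Hypothetical mean counts per voxel from ROIs drawn on the reconstruction.
striatum = [85.0, 92.0, 88.0]
occipital_reference = [30.0, 28.0, 32.0]
print(round(specific_binding_ratio(striatum, occipital_reference), 2))  # → 1.94
```

Because both ROI means shift with filter cut-off (FBP) or iterations/subsets (OSEM), the same patient data can yield different ratios under different reconstruction settings.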

Final project submitted for the degree of Master in Mechanical Engineering, Maintenance and Production branch.

In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines hosts one of the most important deep-water ports, with oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean to the southwest, towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water model wIth Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level), and MHHW (mean higher high water). For each scenario, the tsunami hazard is described by maximum values of wave height, flow depth, drawback, maximum inundation area and run-up. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site considering the individual scenarios at mean sea level, the aggregate scenario, and the influence of the tide on the aggregate scenario. The results confirm the composite source of the Horseshoe and Marques de Pombal faults (HSMPF) as the worst-case scenario, with wave heights of over 10 m reaching the coast approximately 22 min after the rupture. It dominates the aggregate scenario, accounting for about 60 % of the impact area at the test site in terms of maximum wave height and maximum flow depth. The HSMPF scenario inundates a total area of 3.5 km². © Author(s) 2015.
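Shallow-water models of the NSWING kind rest on the long-wave phase speed c = √(gh), which also gives a back-of-the-envelope feel for arrival times of the order reported above. The path length and mean depth below are hypothetical, not values from the study:

```python
import math

def tsunami_speed(depth_m):
    """Long-wave (shallow-water) phase speed c = sqrt(g * h)."""
    g = 9.81  # gravitational acceleration (m/s^2)
    return math.sqrt(g * depth_m)

def travel_time_minutes(distance_km, mean_depth_m):
    """Rough arrival-time estimate over a path of roughly uniform depth."""
    return (distance_km * 1000.0) / tsunami_speed(mean_depth_m) / 60.0

# Hypothetical source-to-coast path: 180 km over a 2000 m mean depth.
print(round(tsunami_speed(2000.0), 1))           # speed in m/s
print(round(travel_time_minutes(180.0, 2000.0), 1))  # → 21.4 minutes
```

Arrival times of a few tens of minutes for near-field Iberian sources fall directly out of this scaling; the nested grids in the full model refine the propagation as the wave shoals towards the harbour.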

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing: the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances, which indicate the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
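The orthogonal subspace projection just described uses the standard projector P = I - U(UᵀU)⁻¹Uᵀ onto the complement of the span of the undesired signatures U. A minimal sketch with made-up toy signatures (4 bands, one undesired signature, one target):

```python
import numpy as np

def osp_projector(U):
    """Projector onto the subspace orthogonal to the columns of U
    (the undesired endmember signatures): P = I - U (U^T U)^{-1} U^T."""
    U = np.asarray(U, dtype=float)
    return np.eye(U.shape[0]) - U @ np.linalg.inv(U.T @ U) @ U.T

# Toy example: 4 bands, one undesired signature u, one target signature d.
u = np.array([[1.0], [2.0], [1.0], [0.0]])
d = np.array([0.0, 1.0, 3.0, 2.0])
pixel = 0.7 * d + 1.5 * u[:, 0]   # mixed pixel, no noise
P = osp_projector(u)
# After projection the undesired component is annihilated, leaving
# only the (projected) target contribution:
print(np.allclose(P @ pixel, 0.7 * (P @ d)))  # → True
```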
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
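The sum-to-one dependence noted above is easy to demonstrate numerically: if one abundance fraction rises, the others must fall, so the fractions are correlated and the ICA independence assumption fails. A quick check with hypothetical Dirichlet-distributed abundances (3 endmembers, 10000 pixels):

```python
import numpy as np

# Abundance fractions summing to one are necessarily dependent.
# Sample abundances uniformly on the simplex (symmetric Dirichlet) and
# measure the sample correlation between two fractions.
rng = np.random.default_rng(42)
A = rng.dirichlet([1.0, 1.0, 1.0], size=10000)
corr = np.corrcoef(A[:, 0], A[:, 1])[0, 1]
print(round(corr, 2))  # clearly negative: the constraint couples the fractions
```

For this symmetric three-endmember case the theoretical correlation is -0.5, so no unmixing method premised on independent abundances can be exact here.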
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of most nearly pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
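The simplex geometry exploited above follows from the linear mixing model: a noise-free pixel x = Ma with nonnegative, sum-to-one abundances a is a convex combination of the endmember signatures. A minimal sketch with a hypothetical endmember matrix; unconstrained least squares already recovers the abundances exactly in this noise-free case, whereas noisy data would call for the constrained (non-negative, sum-to-one) solvers cited in the text:

```python
import numpy as np

# Linear mixing model: x = M a, with abundances a >= 0 and sum(a) = 1,
# so noise-free pixels lie in the simplex spanned by the endmember signatures.
M = np.array([[0.9, 0.1, 0.3],   # 4 bands x 3 endmembers (hypothetical)
              [0.7, 0.2, 0.8],
              [0.2, 0.9, 0.5],
              [0.1, 0.6, 0.4]])
a_true = np.array([0.5, 0.3, 0.2])  # abundance fractions (sum to one)
x = M @ a_true                      # observed mixed pixel, no noise

a_est, *_ = np.linalg.lstsq(M, x, rcond=None)
print(np.round(a_est, 3), round(a_est.sum(), 3))  # → [0.5 0.3 0.2] 1.0
```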
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
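A PCA-style dimensionality reduction of the kind mentioned above can be sketched via the SVD of the mean-centred data. The synthetic cube below (100 pixels, 50 bands, intrinsic dimension 3) is invented for illustration:

```python
import numpy as np

def pca_reduce(X, k):
    """Project spectral vectors (rows of X: pixels x bands) onto the
    first k principal components, via SVD of the mean-centred data."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions; keep the first k.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Hypothetical cube: 100 pixels x 50 bands, intrinsically 3-dimensional
# because it is generated by 3 latent abundances and 3 latent signatures.
rng = np.random.default_rng(0)
A = rng.random((100, 3))   # latent abundances
S = rng.random((3, 50))    # latent signatures
X = A @ S
Y = pca_reduce(X, 3)
print(Y.shape)  # → (100, 3)
```

Because the data here are an exact rank-3 mixture, three components capture all the variance; with real, noisy cubes the retained dimension is chosen from the singular-value (or noise-adjusted) spectrum.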
This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.