929 results for Matrix Transform Method


Relevance: 30.00%

Publisher:

Abstract:

Tissue-engineered grafts for the urinary tract are being investigated for the potential treatment of several urologic diseases. These grafts, which are predominantly tubular, usually require in vitro culture prior to implantation to allow cell engraftment on initially cell-free scaffolds. We have developed a method to produce tubular collagen scaffolds based on plastic compression. Our approach produces a ready-to-use, cell-seeded graft that does not need further in vitro culture prior to implantation. The tubular collagen scaffolds were investigated in particular for their structural, mechanical, and biological properties. The resulting construct showed an especially high collagen density and favorable mechanical properties, as assessed by axial extension and radial dilation; its Young's modulus in particular was greater than that of non-compressed collagen tubes. Seeding density affected the proliferation rate of primary human bladder smooth muscle cells: an optimal density of 10^6 cells per construct resulted in a 25-fold increase in Alamar blue-based fluorescence after 2 weeks in culture. These high-density collagen gel tubes, already seeded with smooth muscle cells, could be further seeded with urothelial cells, drastically shortening the production time of grafts for urinary tract regeneration.

Relevance: 30.00%

Publisher:

Abstract:

We present a heuristic method for learning error-correcting output code (ECOC) matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, optimal codeword separation is sacrificed in favor of maximum class discrimination in the partitions. The hierarchical partition set is built using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated using the UCI database and applied to a real problem, the classification of traffic sign images.
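A minimal sketch of this kind of construction, under stated assumptions: the discriminative criterion is approximated here by the distance between the mean centroids of the two candidate class groups (the paper's actual criterion is not reproduced), and classes absent from a tree node receive a zero in that column.

```python
import numpy as np
from itertools import combinations

def split_classes(classes, centroids):
    """Greedily split a set of classes into two groups that maximize the
    separation of the groups' mean centroids (a stand-in for the paper's
    discriminative criterion)."""
    best, best_score = None, -np.inf
    for k in range(1, len(classes) // 2 + 1):
        for left in combinations(classes, k):
            right = tuple(c for c in classes if c not in left)
            score = np.linalg.norm(
                centroids[list(left)].mean(axis=0)
                - centroids[list(right)].mean(axis=0))
            if score > best_score:
                best, best_score = (left, right), score
    return best

def build_ecoc(classes, centroids, columns):
    """Recursively partition the class space with a binary tree; each internal
    node contributes one ECOC column (+1 / -1 for the two groups, 0 for
    classes not present at that node)."""
    if len(classes) < 2:
        return
    left, right = split_classes(list(classes), centroids)
    col = np.zeros(len(centroids))
    col[list(left)] = 1
    col[list(right)] = -1
    columns.append(col)
    build_ecoc(left, centroids, columns)
    build_ecoc(right, centroids, columns)

# Toy example: 4 classes with 2-D centroids.
centroids = np.array([[0., 0.], [0., 1.], [5., 5.], [6., 5.]])
cols = []
build_ecoc(range(4), centroids, cols)
M = np.column_stack(cols)   # classes x dichotomizers
print(M)
```

Each internal node of the binary tree contributes one dichotomizer, so the resulting code matrix is compact (n_classes - 1 columns) by construction.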

Relevance: 30.00%

Publisher:

Abstract:

Generalized Born methods are currently among the solvation models most commonly used for biological applications. We reformulate the generalized Born molecular volume method initially described by Lee et al. (2003, J Phys Chem, 116, 10606; 2003, J Comp Chem, 24, 1348) using fast Fourier transform convolution integrals. Changes with respect to the initial method are discussed and analyzed. Finally, the method is extensively checked with snapshots from common molecular modeling applications: binding free energy computations and docking. Biologically relevant test systems are chosen, ranging from 855 to 36,091 atoms. It is clearly demonstrated that, in terms of precision, the proposed method performs as well as the original and can better benefit from hardware-accelerated boards.
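A minimal sketch of the core numerical idea, not the authors' implementation: a radially symmetric kernel is convolved with a grid-sampled molecular "volume" function via the convolution theorem, replacing a direct double sum with FFTs. The Gaussian atom density and the regularised kernel are illustrative placeholders.

```python
import numpy as np

# Grid setup (toy sizes; a real GB/molecular-volume calculation uses finer grids).
n, spacing = 64, 0.5                      # grid points per axis, Angstrom
ax = (np.arange(n) - n // 2) * spacing
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")

# Illustrative molecular "volume" function: sum of atom-centred Gaussians.
atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])   # placeholder coordinates
rho = np.zeros_like(X)
for x0, y0, z0 in atoms:
    rho += np.exp(-((X - x0) ** 2 + (Y - y0) ** 2 + (Z - z0) ** 2) / 1.0)

# Illustrative radially symmetric kernel, regularised near the origin,
# standing in for the integrand of a Born-radius integral.
r2 = X ** 2 + Y ** 2 + Z ** 2
kernel = 1.0 / (r2 + 1.0) ** 2

# Convolution theorem: conv(rho, kernel) = IFFT( FFT(rho) * FFT(kernel) ).
conv = np.fft.ifftn(np.fft.fftn(rho) * np.fft.fftn(np.fft.ifftshift(kernel))).real
conv *= spacing ** 3                      # volume element of the integral

print(conv[n // 2, n // 2, n // 2])       # value of the integral at the grid centre
```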

Relevance: 30.00%

Publisher:

Abstract:

Health assessment and medical surveillance of workers exposed to combustion nanoparticles are challenging. The aim was to evaluate the feasibility of using exhaled breath condensate (EBC) from healthy volunteers for (1) assessing the lung-deposited dose of combustion nanoparticles and (2) determining the resulting oxidative stress by measuring hydrogen peroxide (H2O2) and malondialdehyde (MDA). Methods: Fifteen healthy nonsmoking volunteers were exposed to three different levels of sidestream cigarette smoke under controlled conditions. EBC was collected repeatedly before, during, and 1 and 2 hr after exposure. Exposure variables were measured by direct-reading instruments and by active sampling. The EBC samples were analyzed for particle number concentration (light-scattering-based method) and for selected compounds considered oxidative stress markers. Results: Subjects were exposed to average airborne concentrations of up to 4.3×10^5 particles/cm^3 (average geometric size ∼60-80 nm). Up to 10×10^8 particles/mL could be measured in the collected EBC, with a broad size distribution (50th percentile ∼160 nm), but these biological concentrations were not related to the exposure level of cigarette smoke particles. Although both H2O2 and MDA concentrations in EBC increased during exposure, only H2O2 showed a transient normalization 1 hr after exposure before increasing again, whereas MDA levels stayed elevated during the 2 hr post-exposure. Conclusions: Diffusion light scattering for particle counting proved sufficiently sensitive to detect objects in EBC, but lacked specificity for carbonaceous tobacco smoke particles. Our results suggest two phases of oxidation markers in EBC: first, the initial deposition of particles and gases in the lung lining liquid, and later the onset of oxidative stress with associated cell membrane damage. Future studies should extend the follow-up time and should remove either gases or particles from the exposure air to allow differentiation between the different sources of H2O2 and MDA.

Relevance: 30.00%

Publisher:

Abstract:

The objective of this work was to estimate the genetic diversity of improved banana diploids using quantitative data and simple sequence repeat (SSR) marker data simultaneously. The experiment was carried out with 33 diploids, in an augmented block design with 30 regular treatments and three common ones. Eighteen agronomic characteristics and 20 SSR primers were used. The agronomic characteristics and the SSR data were analyzed simultaneously by the Ward-MLM, cluster, and IML procedures. The Ward clustering method considered the combined matrix obtained with the Gower algorithm. The Ward-MLM procedure identified three ideal groups (G1, G2, and G3) based on the pseudo-F and pseudo-t² statistics. The dendrogram showed relative similarity among the G1 genotypes, explained by their genealogy. In G2, 'Calcutta 4' appears in 62% of the genealogies. Similar behavior was observed in G3, in which the 028003-01 diploid is the male parent of the 086079-10 and 042079-06 genotypes. The method with canonical variables had greater discriminatory power than Ward-MLM. Although reduced, the available genetic variability is sufficient to be used in the development of new hybrids.
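A minimal sketch of the combined-matrix step, under stated assumptions: the Gower coefficient is hand-rolled for a toy mix of quantitative traits and binary marker bands, and the resulting dissimilarity matrix is fed to Ward's hierarchical clustering. This illustrates the distance construction only, not the Ward-MLM procedure itself.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def gower_distance(quant, binary):
    """Gower dissimilarity for mixed data: range-normalised Manhattan distance
    for quantitative traits, simple mismatch for binary markers, averaged over
    all variables."""
    n = quant.shape[0]
    ranges = quant.max(axis=0) - quant.min(axis=0)
    ranges[ranges == 0] = 1.0
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d_q = np.abs(quant[i] - quant[j]) / ranges      # in [0, 1] per trait
            d_b = (binary[i] != binary[j]).astype(float)    # 0/1 mismatches
            D[i, j] = D[j, i] = np.concatenate([d_q, d_b]).mean()
    return D

rng = np.random.default_rng(0)
quant = rng.normal(size=(10, 5))             # 10 genotypes x 5 agronomic traits (toy)
binary = rng.integers(0, 2, size=(10, 8))    # 10 genotypes x 8 marker bands (toy)

D = gower_distance(quant, binary)
Z = linkage(squareform(D), method="ward")    # Ward clustering on the combined matrix
groups = fcluster(Z, t=3, criterion="maxclust")
print(groups)
```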

Relevance: 30.00%

Publisher:

Abstract:

In this paper we propose a method for computing JPEG quantization matrices that achieve a given mean square error (MSE) or PSNR. We then employ the method to compute definition scripts for the JPEG standard's progressive operation mode using a quantization-based approach, so that a trial-and-error procedure is no longer needed to obtain a desired PSNR and/or definition script, which reduces cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard's progressive operation mode from a quantization-based approach. A relationship is found between the measured image quality at a given stage of the coding process and a quantization matrix, so the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts can be generated while avoiding an excessive number of stages and removing small stages that do not contribute a noticeable image-quality improvement during decoding.
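A minimal sketch of the underlying relation, not the authors' closed-form method: each DCT coefficient is modelled as a Laplacian source, the MSE introduced by uniform quantization with step q is estimated by Monte Carlo, and the step is found by bisection so that each coefficient meets its share of a global PSNR budget. The Laplacian scales and the equal-budget split are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
unit = rng.laplace(scale=1.0, size=50_000)   # reusable unit-scale Laplacian samples

def quant_mse(b, q):
    """MSE introduced by uniform quantization with step q of a zero-mean
    Laplacian source of scale b (Monte Carlo estimate on shared samples)."""
    x = b * unit
    return np.mean((x - q * np.round(x / q)) ** 2)

def step_for_mse(b, target):
    """Bisection on the step size until the coefficient's quantization MSE
    matches its share of the global error budget."""
    lo, hi = 1e-3, 1e3
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if quant_mse(b, mid) > target else (mid, hi)
    return 0.5 * (lo + hi)

target_psnr = 38.0                               # desired quality, dB
global_mse = 255.0 ** 2 / 10 ** (target_psnr / 10)

# Illustrative Laplacian scales for an 8x8 DCT block (low frequencies carry more energy).
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
scales = 60.0 / (1.0 + i + j)

# Equal per-coefficient error budget; since the DCT is orthonormal, the average
# pixel-domain MSE then equals global_mse (a real method adds visual/local constraints).
Q = np.array([[step_for_mse(b, global_mse) for b in row] for row in scales])
print(np.round(Q).astype(int))
```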

Relevance: 30.00%

Publisher:

Abstract:

The objective of this work was to propose a way of using Tocher's clustering method to obtain a matrix similar to the cophenetic matrix obtained for hierarchical methods, which would allow the calculation of a cophenetic correlation. To illustrate how the proposed cophenetic matrix is obtained, we used two dissimilarity matrices - one obtained with the generalized squared Mahalanobis distance and the other with the Euclidean distance - between 17 garlic cultivars, based on six morphological characters. In essence, the proposed cophenetic matrix uses the average distances within and between clusters, computed after the clustering has been performed. A function in the R language was proposed to compute the cophenetic matrix for Tocher's method, and the empirical distribution of this correlation coefficient was briefly studied. For both dissimilarity measures, the values of the cophenetic correlation obtained for Tocher's method were higher than those obtained with the hierarchical methods (Ward's algorithm and average linkage - UPGMA). Comparisons between clusterings made with the agglomerative hierarchical methods and with Tocher's method can thus be performed using a common criterion: the correlation between the matrices of original and cophenetic distances.
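A minimal sketch of the proposed construction, written in Python rather than the authors' R function: pairs of items in the same Tocher cluster receive the average within-cluster distance, pairs in different clusters receive the average between-cluster distance, and the cophenetic correlation is the Pearson correlation between the original and cophenetic distances.

```python
import numpy as np

def tocher_cophenetic(D, labels):
    """Cophenetic-like matrix for a non-hierarchical (e.g. Tocher) partition:
    entry (i, j) is the mean distance within the shared cluster, or the mean
    distance between the two clusters when i and j are separated."""
    labels = np.asarray(labels)
    C = np.zeros_like(D, dtype=float)
    for a in np.unique(labels):
        for b in np.unique(labels):
            ia, ib = np.where(labels == a)[0], np.where(labels == b)[0]
            block = D[np.ix_(ia, ib)]
            if a == b:
                # mean over distinct pairs within the cluster (0 if singleton)
                mean = block[np.triu_indices_from(block, k=1)].mean() if len(ia) > 1 else 0.0
            else:
                mean = block.mean()
            C[np.ix_(ia, ib)] = mean
    np.fill_diagonal(C, 0.0)
    return C

def cophenetic_correlation(D, C):
    """Pearson correlation between original and cophenetic distances (upper triangle)."""
    iu = np.triu_indices_from(D, k=1)
    return np.corrcoef(D[iu], C[iu])[0, 1]

# Toy example: 6 items, 2 clusters.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 1, (3, 2)), rng.normal(5, 1, (3, 2))])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
C = tocher_cophenetic(D, labels=[0, 0, 0, 1, 1, 1])
print(round(cophenetic_correlation(D, C), 3))
```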

Relevance: 30.00%

Publisher:

Abstract:

Electrical Impedance Tomography (EIT) is an imaging method that enables a volume conductivity map of a subject to be produced from multiple impedance measurements. It has the potential to become a portable non-invasive imaging technique of particular use in imaging brain function. Accurate numerical forward models may be used to improve image reconstruction but, until now, have employed an assumption of isotropic tissue conductivity. This may be expected to introduce inaccuracy, as body tissues, especially white matter and the skull in head imaging, are highly anisotropic. The purpose of this study was, for the first time, to develop a method for incorporating anisotropy in a forward numerical model for EIT of the head and to assess the resulting improvement in image quality for linear reconstruction of one example human head. A realistic Finite Element Model (FEM) of an adult human head with segments for the scalp, skull, CSF, and brain was produced from a structural MRI. Anisotropy of the brain was estimated from a diffusion tensor MRI of the same subject, and anisotropy of the skull was approximated from the structural information. A method for incorporating anisotropy in the forward model and using it in image reconstruction was produced. The improvement in reconstructed image quality was assessed in computer simulation by generating forward data and then performing linear reconstruction with a sensitivity-matrix approach. The mean boundary-data difference between anisotropic and isotropic forward models at a reference conductivity was 50%. Use of the correct anisotropic FEM in image reconstruction, as opposed to an isotropic one, corrected an error of 24 mm in imaging a 10% conductivity decrease located in the hippocampus, improved localisation of conductivity changes deep in the brain and due to epilepsy by 4-17 mm, and, overall, led to a substantial improvement in image quality. This suggests that incorporating anisotropy in the numerical models used for image reconstruction is likely to improve EIT image quality.
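A minimal sketch of the linear, sensitivity-matrix reconstruction step only (the head FEM and anisotropic forward model are not reproduced): given a Jacobian J mapping conductivity perturbations to boundary-voltage changes, a zeroth-order Tikhonov-regularised least-squares inverse recovers the perturbation from noisy data. The random J, element count, and regularisation parameter are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_meas, n_elem = 200, 500                 # boundary measurements, FEM elements (toy sizes)
J = rng.normal(size=(n_meas, n_elem))     # stand-in sensitivity (Jacobian) matrix

# Simulated "forward data": a localised 10% conductivity decrease plus noise.
d_sigma_true = np.zeros(n_elem)
d_sigma_true[240:260] = -0.10
d_v = J @ d_sigma_true + rng.normal(scale=1e-2, size=n_meas)

# Zeroth-order Tikhonov (regularised least squares):
#   d_sigma = argmin ||J d_sigma - d_v||^2 + lam * ||d_sigma||^2
lam = 1e-1
d_sigma_rec = np.linalg.solve(J.T @ J + lam * np.eye(n_elem), J.T @ d_v)

# Crude localisation check: where is the strongest reconstructed decrease?
print(int(np.argmin(d_sigma_rec)), float(d_sigma_rec.min()))
```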

Relevance: 30.00%

Publisher:

Abstract:

Dissolved organic matter (DOM) is a complex mixture of organic compounds, ubiquitous in marine and freshwater systems. Fluorescence spectroscopy, by means of Excitation-Emission Matrices (EEM), has become an indispensable tool to study DOM sources, transport, and fate in aquatic ecosystems. However, the statistical treatment of large and heterogeneous EEM data sets still represents an important challenge for biogeochemists. Recently, Self-Organising Maps (SOM) have been proposed as a tool to explore patterns in large EEM data sets. SOM is a pattern-recognition method that clusters and reduces the dimensionality of the input EEMs without relying on any assumption about the data structure. In this paper, we show how SOM, coupled with a correlation analysis of the component planes, can be used both to explore patterns among samples and to identify individual fluorescence components. We analysed a large and heterogeneous EEM data set, including samples from a river catchment collected under a range of hydrological conditions, along a 60-km downstream gradient, and under the influence of different degrees of anthropogenic impact. According to our results, chemical industry effluents appeared to have unique and distinctive spectral characteristics, whereas river samples collected under flash flood conditions showed homogeneous EEM shapes. The correlation analysis of the component planes suggested the presence of four fluorescence components, consistent with DOM components previously described in the literature. A remarkable strength of this methodology is that outlier samples appear naturally integrated in the analysis. We conclude that SOM coupled with a correlation analysis procedure is a promising tool for studying large and heterogeneous EEM data sets.
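A minimal sketch of this kind of workflow using the third-party MiniSom package (an assumption; the paper does not name its SOM implementation): each EEM is unfolded into a vector, a small map is trained, and the component planes (one weight plane per excitation-emission pair) can then be correlated with one another. The data here are random placeholders.

```python
import numpy as np
from minisom import MiniSom   # third-party SOM implementation (assumed available)

rng = np.random.default_rng(0)

# Placeholder EEM data: 120 samples, each a 20x30 excitation-emission matrix,
# unfolded into 600-dimensional input vectors.
n_samples, n_ex, n_em = 120, 20, 30
eems = rng.random((n_samples, n_ex * n_em))

# Train a small SOM on the unfolded EEMs.
som = MiniSom(6, 6, n_ex * n_em, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(eems)
som.train_random(eems, 5000)

# Best-matching unit of each sample: patterns/clusters among samples.
bmus = np.array([som.winner(x) for x in eems])

# Component planes: for each excitation-emission pair, the 6x6 map of weights.
weights = som.get_weights()                        # shape (6, 6, n_ex * n_em)
planes = weights.reshape(6 * 6, n_ex * n_em).T     # one row per EEM variable

# Correlation between component planes: highly correlated planes suggest
# wavelength pairs that behave as a single fluorescence component.
plane_corr = np.corrcoef(planes)
print(bmus[:5], plane_corr.shape)
```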

Relevance: 30.00%

Publisher:

Abstract:

Multispectral images contain information from several spectral wavelengths. They are widely used in remote sensing and are becoming more common in computer vision and industrial applications. Typically, a single multispectral image in remote sensing may occupy hundreds of megabytes of disk space, and several such images may be produced by a single measurement. This study considers the compression of multispectral images. The lossy compression is based on the wavelet transform, and we compare the suitability of different wavelet filters for the compression. A method for selecting a wavelet filter for the compression and reconstruction of multispectral images is developed. The performance of compression based on the multidimensional wavelet transform is compared to other compression methods such as PCA, ICA, SPIHT, and DCT/JPEG. The quality of the compression and reconstruction is measured by quantitative measures such as the signal-to-noise ratio. In addition, we have developed a qualitative measure that combines information from the spatial and spectral dimensions of a multispectral image and also accounts for the visual quality of the individual bands.
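A minimal sketch of multidimensional wavelet-transform compression using the PyWavelets package; the filter, decomposition level, and keep-fraction below are illustrative, not the filters selected in the study.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 32))                       # placeholder: rows x cols x bands

# 3-D wavelet decomposition over the spatial and spectral dimensions.
coeffs = pywt.wavedecn(cube, wavelet="db4", level=2)
arr, slices = pywt.coeffs_to_array(coeffs)

# Simple lossy step: keep only the 5% largest-magnitude coefficients.
keep = 0.05
thresh = np.quantile(np.abs(arr), 1 - keep)
arr_c = np.where(np.abs(arr) >= thresh, arr, 0.0)

# Reconstruction and a quantitative quality check (SNR in dB).
rec = pywt.waverecn(pywt.array_to_coeffs(arr_c, slices, output_format="wavedecn"),
                    wavelet="db4")
rec = rec[:64, :64, :32]                              # trim possible padding
snr = 10 * np.log10(np.sum(cube ** 2) / np.sum((cube - rec) ** 2))
print(f"kept {keep:.0%} of coefficients, SNR = {snr:.1f} dB")
```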

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVE: Renal resistive index (RRI) varies directly with renal vascular stiffness and pulse pressure. RRI correlates positively with arteriolosclerosis in damaged kidneys and predicts progressive renal dysfunction. Matrix Gla-protein (MGP) is a vascular calcification inhibitor that needs vitamin K to be activated. Inactive MGP, known as desphospho-uncarboxylated MGP (dp-ucMGP), can be measured in plasma and has been associated with various cardiovascular (CV) markers, CV outcomes, and mortality. In this study we hypothesize that increased RRI is associated with high levels of dp-ucMGP. DESIGN AND METHOD: We recruited participants via a multi-center, family-based, cross-sectional study in Switzerland exploring the role of genes and kidney hemodynamics in blood pressure regulation. Dp-ucMGP was quantified in plasma samples by sandwich ELISA. Renal Doppler sonography was performed using a standardized protocol to measure RRIs on three segmental arteries in each kidney, and the mean of the six measurements was reported. Multiple regression analysis was performed to estimate associations between RRI and dp-ucMGP, adjusting for sex, age, pulse pressure, mean pressure, renal function, and other CV risk factors. RESULTS: We included 1035 participants in our analyses. Mean values were 0.64 ± 0.06 for RRI and 0.44 ± 0.21 nmol/L for dp-ucMGP. RRI was positively associated with dp-ucMGP both before and after adjustment for sex, age, body mass index, pulse pressure, mean pressure, heart rate, renal function, low- and high-density lipoprotein, smoking status, diabetes, blood pressure- and cholesterol-lowering drugs, and history of CV disease (P < 0.001). CONCLUSIONS: RRI is independently and positively associated with high levels of dp-ucMGP after adjustment for pulse pressure and common CV risk factors. Further studies are needed to determine whether vitamin K supplementation can have a positive effect on renal vascular stiffness and kidney function.


Relevance: 30.00%

Publisher:

Abstract:

The objective of this thesis is to identify the key and most suitable customer portfolio models and customer matrices for defining customer relationships. The study focuses on valuing customer relationships and identifying key accounts in the case company. The most central and suitable customer portfolio models are taken into account in the evaluation of customers. The theoretical part of the thesis presents the best-known and most widely used customer portfolio models and matrices, based on the literature in the field. In addition, perspectives from relationship marketing, customer relationship management, and product portfolio theory are combined with the customer portfolio models. The main literature sources come from the fields of management and marketing. The empirical part presents the case company and its current customer relationship management practice. Furthermore, improvements to the company's current method of valuing customer relationships are proposed, so that the calculation of customer relationship value corresponds as closely as possible to the company's current needs. A focus group interview is also used to determine the value of customer relationships. Key accounts are identified, and the situation is illustrated by placing the key accounts in a customer portfolio.

Relevance: 30.00%

Publisher:

Abstract:

Organizations acquire resources, skills, and technologies in search of the mix of capabilities that will make them winners in a competitive market. These are all important factors for organizations operating in today's business environment. So far, there have been no significant studies of organizational capabilities in the field of purchasing and supply management (PSM). The literature review shows that PSM capabilities need to be studied more comprehensively. This study attempts to reveal and fill this gap by providing a PSM capability matrix that identifies the key PSM capabilities, approached from two angles: three primary PSM capabilities with nine sub-capabilities are defined and, in addition, individual and organizational PSM capabilities are identified and evaluated. The former refers to the PSM capability matrix of this study, which is based on the strategic and operative PSM capabilities that complement the economic ones, while the latter relates to the evaluation of PSM capabilities, such as the buyer profiles of individual PSM capabilities and the PSM capability map of the organizational ones. This is a constructive case study. The aim is to define what purchasing and supply management capabilities are and how they can be evaluated. The study presents a PSM capability matrix for identifying and evaluating the capabilities and for defining capability gaps by comparing the ideal level of PSM capabilities to the realized one. The research questions are investigated with two case organizations. This study argues that PSM capabilities can be classified into three primary categories with nine sub-categories and that a PSM capability matrix with four evaluation categories can thus be formed. Buyer profiles are further identified to reveal the PSM capability gap. The resource-based view (RBV) and the dynamic capabilities view (DCV), together with the PSM literature, are used to define the individual and organizational capabilities. The key findings of this study are (i) the PSM capability matrix for identifying the PSM capabilities, (ii) the evaluation of the capabilities to define PSM capability gaps, and (iii) the buyer profiles for identifying individual PSM capabilities and defining the organizational ones. Dynamic capabilities are also related to the PSM capability gap: if a gap is identified, the organization can renew its PSM capabilities, thereby creating mutual learning and increasing its organizational capabilities; only then is there potential for dynamic capabilities. On this basis, the purchasing strategy, purchasing policy, and procedures should be identified and implemented dynamically.

Relevance: 30.00%

Publisher:

Abstract:

Reversed-phase liquid chromatography (RPLC) coupled to mass spectrometry (MS) is the gold-standard technique in bioanalysis. However, hydrophilic interaction chromatography (HILIC) can be a viable alternative to RPLC for the analysis of polar and/or ionizable compounds, as it often provides higher MS sensitivity and alternative selectivity. Nevertheless, this technique can also be prone to matrix effects (ME). ME are one of the major issues in quantitative LC-MS bioanalysis, and to ensure acceptable method performance (i.e., trueness and precision), a careful evaluation and minimization of ME is required. In the present study, the incidence of ME in HILIC-MS/MS and RPLC-MS/MS was compared for plasma and urine samples using two representative sets of 38 pharmaceutical compounds and 40 doping agents, respectively. The optimal generic chromatographic conditions in terms of selectivity with respect to interfering compounds were established in both chromatographic modes by testing three different stationary phases per mode at different mobile-phase pH values. A second step involved the assessment of ME in RPLC and HILIC under the best generic conditions, using the post-extraction addition method. Biological samples were prepared with two different sample pre-treatments: a non-selective clean-up procedure (protein precipitation for plasma and simple dilution for urine) and a selective sample preparation, i.e., solid-phase extraction for both matrices. The non-selective pre-treatments led to significantly less ME in RPLC than in HILIC, regardless of the matrix. In contrast, HILIC appeared to be a valuable alternative to RPLC for plasma and urine samples treated by a selective sample preparation: in this case, the compounds affected by ME differed between HILIC and RPLC, and ME occurrence in RPLC was generally lower than in HILIC for urine samples and similar for plasma samples. The complementarity of the two chromatographic modes was also demonstrated, as ME was only rarely observed for urine and plasma samples when the most appropriate chromatographic mode was selected.
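A minimal sketch of the post-extraction addition calculation (a common formulation; the peak areas below are placeholders, not data from the study): the matrix effect compares the response of an analyte spiked into extracted blank matrix with the response of the same concentration in neat solvent.

```python
def matrix_effect(area_post_extraction_spike: float, area_neat_standard: float) -> float:
    """Matrix effect in percent: 0% means no effect, negative values indicate
    ion suppression, positive values indicate ion enhancement."""
    return (area_post_extraction_spike / area_neat_standard - 1.0) * 100.0

# Placeholder peak areas for one analyte at one concentration level.
neat = 1.00e6                    # standard in neat solvent
plasma_spike = 7.2e5             # same concentration spiked into extracted plasma
urine_spike = 1.08e6             # same concentration spiked into extracted urine

print(f"plasma ME = {matrix_effect(plasma_spike, neat):+.1f}%")   # suppression
print(f"urine  ME = {matrix_effect(urine_spike, neat):+.1f}%")    # slight enhancement
```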