907 results for R-MATRIX METHOD


Relevance: 30.00%

Abstract:

Background: Experimental evidence demonstrates that vegetable-derived extracts inhibit cholesterol absorption in the gastrointestinal tract. To further explore the underlying mechanisms, we modeled duodenal contents with several vegetable extracts. Results: Employing a widely used cholesterol quantification method based on a cholesterol oxidase-peroxidase coupled reaction, we analyzed the effects on cholesterol partition. The interferences observed were investigated by studying specific and unspecific inhibitors of the cholesterol oxidase-peroxidase coupled reaction. Cholesterol was also quantified by LC/MS. We found a significant interference of diverse (cocoa- and tea-derived) extracts with this method. The interference was strongly dependent on the model matrix: in phosphate-buffered saline, the development of unspecific fluorescence could be inhibited by catalase (but not by heat denaturation), suggesting vegetable-extract-derived H2O2 production, whereas in bile-containing model systems the interference also comprised cholesterol oxidase inhibition. Several strategies, such as cholesterol standard addition and the use of suitable blanks containing vegetable extracts, were tested. When these failed, a mass-spectrometry-based chromatographic assay allowed quantification of cholesterol in models of duodenal contents in the presence of vegetable extracts. Conclusions: We propose that the use of cholesterol oxidase- and/or peroxidase-based systems for cholesterol analysis in foodstuffs should be carefully monitored, as important interferences with all components of the enzymatic chain were evident. The use of adequate controls, standard addition and, finally, chromatographic analyses solves these issues.
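The standard-addition strategy mentioned in this abstract can be sketched in a few lines; the spike levels and assay signals below are hypothetical and only illustrate how the unspiked concentration is extrapolated from a spiked dilution series.

```python
import numpy as np

# Hypothetical standard-addition data: known cholesterol spikes (mM) added to
# aliquots of the same duodenal model, and the corresponding assay signals.
added = np.array([0.0, 0.5, 1.0, 2.0])      # spiked concentration (mM)
signal = np.array([0.42, 0.63, 0.85, 1.27])  # assay response (arbitrary units)

# Fit signal = slope * added + intercept; the concentration in the unspiked
# sample is estimated as intercept / slope (the x-axis intercept magnitude).
slope, intercept = np.polyfit(added, signal, 1)
c_sample = intercept / slope
print(f"Estimated cholesterol in unspiked sample: {c_sample:.2f} mM")
```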

Relevance: 30.00%

Abstract:

Intravenous thrombolysis (IVT) as treatment for acute ischaemic stroke may be insufficient to achieve recanalisation in certain patients. Predicting the probability of non-recanalisation after IVT may have the potential to influence patient selection for more aggressive management strategies. We aimed to derive and internally validate a predictive score for post-thrombolytic non-recanalisation using clinical and radiological variables. From the thrombolysis registries of four Swiss academic stroke centres (Lausanne, Bern, Basel and Geneva), we selected patients with a large arterial occlusion on acute imaging and a repeated arterial assessment at 24 hours. Based on a logistic regression analysis, an integer-based score was generated for each covariate of the fitted multivariate model. The performance of the integer-based predictive model was assessed by bootstrapping the available data and by cross-validation (delete-d method). In 599 thrombolysed strokes, five variables were identified as independent predictors of absence of recanalisation: Acute glucose > 7 mmol/l (A), significant extracranial vessel STenosis (ST), decreased Range of visual fields (R), large Arterial occlusion (A) and decreased Level of consciousness (L). All variables were weighted 1 point, except (L), which received 2 points based on the β-coefficients on the logistic scale. ASTRAL-R scores of 0, 3 and 6 corresponded to non-recanalisation probabilities of 18, 44 and 74%, respectively. Predictive ability showed an AUC of 0.66 (95% CI 0.61-0.70) with bootstrapping and 0.66 (0.63-0.68) with delete-d cross-validation. In conclusion, the 5-item ASTRAL-R score moderately predicts non-recanalisation at 24 hours in thrombolysed ischaemic strokes. If its performance can be confirmed by external validation and its clinical usefulness proven, the score may influence patient selection for more aggressive revascularisation strategies in routine clinical practice.
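A minimal sketch of how the ASTRAL-R point weights described above add up. The example patient is hypothetical, and only the non-recanalisation probabilities explicitly reported in the abstract (scores 0, 3 and 6) are echoed back; intermediate scores are not mapped.

```python
# Point weights as described in the abstract: each item scores 1, except
# decreased level of consciousness (L), which scores 2.
WEIGHTS = {
    "glucose_gt_7_mmol_l": 1,        # A: acute glucose > 7 mmol/l
    "extracranial_stenosis": 1,      # ST: significant extracranial vessel stenosis
    "decreased_visual_fields": 1,    # R: decreased range of visual fields
    "large_arterial_occlusion": 1,   # A: large arterial occlusion
    "decreased_consciousness": 2,    # L: decreased level of consciousness
}

REPORTED_PROBABILITY = {0: 0.18, 3: 0.44, 6: 0.74}  # values quoted in the abstract

def astral_r_score(findings: dict) -> int:
    """Sum the points of the findings that are present."""
    return sum(WEIGHTS[name] for name, present in findings.items() if present)

patient = {  # hypothetical example
    "glucose_gt_7_mmol_l": True,
    "extracranial_stenosis": False,
    "decreased_visual_fields": True,
    "large_arterial_occlusion": True,
    "decreased_consciousness": False,
}
score = astral_r_score(patient)
print(score, REPORTED_PROBABILITY.get(score, "not reported in the abstract"))
```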

Relevance: 30.00%

Abstract:

The binding energies of deformed even-even nuclei have been analyzed within the framework of a recently proposed microscopic-macroscopic model. We have used the semiclassical Wigner-Kirkwood ℏ expansion up to fourth order, instead of the usual Strutinsky averaging scheme, to compute the shell corrections in a deformed Woods-Saxon potential including the spin-orbit contribution. For a large set of 561 even-even nuclei with Z ≥ 8 and N ≥ 8, we find an rms deviation from experiment of 610 keV in the binding energies, comparable to the one found for the same set of nuclei with the finite-range droplet model of Möller and Nix (656 keV). As applications of our model, we explore its predictive power near the proton and neutron drip lines as well as in the superheavy mass region. Next, we systematically explore the fourth-order Wigner-Kirkwood corrections to the smooth part of the energy. It is found that the ratio of the fourth-order to the second-order corrections behaves in a very regular manner as a function of the asymmetry parameter I = (N − Z)/A. This allows us to absorb the fourth-order corrections into the second-order contributions to the binding energy, which enables us to simplify and speed up the calculation of deformed nuclei.
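The absorption of the fourth-order term described above can be written compactly. The notation below is a hedged reconstruction: the symbols E_2, E_4 and f(I) are not defined in the abstract and are used here only to illustrate the idea.

```latex
% Assumed notation: E_2 and E_4 are the second- and fourth-order
% Wigner-Kirkwood corrections to the smooth part of the energy.
\[
  \frac{E_4}{E_2} \simeq f(I), \qquad I = \frac{N-Z}{A}
  \quad\Longrightarrow\quad
  E_2 + E_4 \simeq \bigl[1 + f(I)\bigr]\, E_2 ,
\]
% so only the second-order correction needs to be computed explicitly
% once f(I) has been parametrized as a function of the asymmetry.
```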

Relevance: 30.00%

Abstract:

OBJECTIVE: The renal resistive index (RRI) varies directly with renal vascular stiffness and pulse pressure. RRI correlates positively with arteriolosclerosis in damaged kidneys and predicts progressive renal dysfunction. Matrix Gla-protein (MGP) is a vascular calcification inhibitor that needs vitamin K to be activated. Inactive MGP, known as desphospho-uncarboxylated MGP (dp-ucMGP), can be measured in plasma and has been associated with various cardiovascular (CV) markers, CV outcomes and mortality. In this study we hypothesized that increased RRI is associated with high levels of dp-ucMGP. DESIGN AND METHOD: We recruited participants via a multi-center, family-based cross-sectional study in Switzerland exploring the role of genes and kidney hemodynamics in blood pressure regulation. Dp-ucMGP was quantified in plasma samples by sandwich ELISA. Renal Doppler sonography was performed using a standardized protocol to measure RRI on 3 segmental arteries in each kidney, and the mean of the 6 measurements was reported. Multiple regression analysis was performed to estimate the association between RRI and dp-ucMGP, adjusting for sex, age, pulse pressure, mean pressure, renal function and other CV risk factors. RESULTS: We included 1035 participants in our analyses. Mean values were 0.64 ± 0.06 for RRI and 0.44 ± 0.21 nmol/L for dp-ucMGP. RRI was positively associated with dp-ucMGP both before and after adjustment for sex, age, body mass index, pulse pressure, mean pressure, heart rate, renal function, low- and high-density lipoprotein, smoking status, diabetes, blood-pressure- and cholesterol-lowering drugs, and history of CV disease (P < 0.001). CONCLUSIONS: RRI is independently and positively associated with high levels of dp-ucMGP after adjustment for pulse pressure and common CV risk factors. Further studies are needed to determine whether vitamin K supplementation can have a positive effect on renal vascular stiffness and kidney function.
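A minimal sketch of the adjusted-regression step described above, averaging the six segmental RRI measurements and regressing on dp-ucMGP with a subset of covariates. The file name and column names are hypothetical, not those of the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per participant, illustrative column names.
df = pd.read_csv("rri_cohort.csv")

# Mean RRI over the 6 segmental-artery measurements (3 per kidney).
rri_cols = [f"rri_{i}" for i in range(1, 7)]
df["rri_mean"] = df[rri_cols].mean(axis=1)

# Multiple linear regression of mean RRI on dp-ucMGP, adjusted for a subset
# of the covariates listed in the abstract.
model = smf.ols(
    "rri_mean ~ dp_ucmgp + sex + age + pulse_pressure + mean_pressure + egfr",
    data=df,
).fit()
print(model.summary())
```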

Relevance: 30.00%

Abstract:

Dissolved organic matter (DOM) is a complex mixture of organic compounds, ubiquitous in marine and freshwater systems. Fluorescence spectroscopy, by means of Excitation-Emission Matrices (EEM), has become an indispensable tool to study DOM sources, transport and fate in aquatic ecosystems. However, the statistical treatment of large and heterogeneous EEM data sets still represents an important challenge for biogeochemists. Recently, Self-Organising Maps (SOM) have been proposed as a tool to explore patterns in large EEM data sets. The SOM is a pattern-recognition method that clusters the input EEMs and reduces their dimensionality without relying on any assumption about the data structure. In this paper, we show how SOM, coupled with a correlation analysis of the component planes, can be used both to explore patterns among samples and to identify individual fluorescence components. We analysed a large and heterogeneous EEM data set, including samples from a river catchment collected under a range of hydrological conditions, along a 60-km downstream gradient, and under the influence of different degrees of anthropogenic impact. According to our results, chemical industry effluents appeared to have unique and distinctive spectral characteristics. On the other hand, river samples collected under flash-flood conditions showed homogeneous EEM shapes. The correlation analysis of the component planes suggested the presence of four fluorescence components, consistent with DOM components previously described in the literature. A remarkable strength of this methodology was that outlier samples appeared naturally integrated in the analysis. We conclude that SOM coupled with a correlation analysis procedure is a promising tool for studying large and heterogeneous EEM data sets.
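A minimal sketch of the SOM-plus-component-plane-correlation idea, using the third-party MiniSom package. The input file, array shapes, map size and training parameters are assumptions made only for illustration.

```python
import numpy as np
from minisom import MiniSom  # third-party SOM implementation

# Assumed input: each EEM unfolded into a 1-D vector (samples x ex/em pairs).
eems = np.load("eems_unfolded.npy")                      # (n_samples, n_pairs)
eems = (eems - eems.mean(0)) / (eems.std(0) + 1e-12)     # z-score each pair

som = MiniSom(x=10, y=10, input_len=eems.shape[1], sigma=1.5, learning_rate=0.5)
som.random_weights_init(eems)
som.train_random(eems, num_iteration=5000)

# Component planes: one 10x10 map of weights per excitation/emission pair.
planes = som.get_weights().reshape(-1, eems.shape[1])    # (map units, pairs)

# Correlating the component planes groups wavelength pairs that co-vary,
# which is how candidate fluorescence components can be identified.
corr = np.corrcoef(planes.T)
print(corr.shape)
```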

Relevance: 30.00%

Abstract:

The aim of this thesis is to identify the central and most suitable customer portfolio models and customer matrices for assessing customer relationships. The study focuses on valuing customer relationships and identifying key accounts in the case company. The most relevant and suitable customer portfolio models are taken into account in evaluating the customers. The theoretical part of the thesis presents the best-known and most widely used customer portfolio models and matrices on the basis of the literature in the field. In addition, the customer portfolio models are combined with perspectives from the theories of relationship marketing, customer relationship management and product portfolios. The main literature sources come from the fields of management and marketing. The empirical part presents the case company and its current customer relationship management practice. Improvements are also proposed to the case company's current method of valuing customer relationships, so that the calculation of customer relationship value corresponds as closely as possible to the company's present needs. A focus group interview is also used to determine the value of customer relationships. Finally, the key accounts are identified and the situation is illustrated by placing them in a customer portfolio.

Relevance: 30.00%

Abstract:

The semiclassical Wigner-Kirkwood ℏ expansion method is used to calculate shell corrections for spherical and deformed nuclei. The expansion is carried out up to fourth order in ℏ. A systematic study of Wigner-Kirkwood averaged energies is presented as a function of the deformation degrees of freedom. The shell corrections, along with the pairing energies obtained by using the Lipkin-Nogami scheme, are used in the microscopic-macroscopic approach to calculate binding energies. The macroscopic part is obtained from a liquid drop formula with six adjustable parameters. Considering a set of 367 spherical nuclei, the liquid drop parameters are adjusted to reproduce the experimental binding energies, which yields a root mean square (rms) deviation of 630 keV. It is shown that the proposed approach is indeed promising for the prediction of nuclear masses.
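The adjustment of the six liquid-drop parameters can be sketched as a least-squares fit. The abstract does not give the functional form, so the semi-empirical parametrization below (volume, surface, Coulomb, volume- and surface-asymmetry, and pairing-like terms), the data file and the starting values are assumptions used only to illustrate the fitting step.

```python
import numpy as np
from scipy.optimize import curve_fit

def liquid_drop(NZ, a_v, a_s, a_c, a_sym, a_ss, a_p):
    """Illustrative six-parameter liquid-drop binding energy (MeV).
    This assumed form is not necessarily the one used in the paper."""
    N, Z = NZ
    A = N + Z
    I = (N - Z) / A
    return (a_v * A - a_s * A**(2 / 3) - a_c * Z**2 / A**(1 / 3)
            - (a_sym * A - a_ss * A**(2 / 3)) * I**2 - a_p / np.sqrt(A))

# Hypothetical experimental table: columns N, Z, binding energy (MeV).
N, Z, B_exp = np.loadtxt("binding_energies.dat", unpack=True)

popt, _ = curve_fit(liquid_drop, (N, Z), B_exp, p0=[16, 18, 0.7, 23, 30, 12])
rms = np.sqrt(np.mean((liquid_drop((N, Z), *popt) - B_exp) ** 2))
print(f"rms deviation: {rms * 1000:.0f} keV")
```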

Relevance: 30.00%

Abstract:

Very large molecular systems can be calculated with the so-called CNDOL approximate Hamiltonians, which have been developed by avoiding oversimplifications and using only a priori parameters and formulas from the simpler NDO methods. A new diagonal one-electron term named CNDOL/21 shows great consistency and easier SCF convergence when used together with an appropriate function for charge-repulsion energies derived from traditional formulas. It is possible to reliably obtain a priori molecular orbitals and electron excitation properties after configuration interaction of singly excited determinants, while maintaining interpretative possibilities even though it is a simplified Hamiltonian. Tests with some unequivocal gas-phase maxima of simple molecules (benzene, furfural, acetaldehyde, hexyl alcohol, methyl amine, 2,5-dimethyl-2,4-hexadiene, and ethyl sulfide) confirm the general quality of this approach in comparison with other methods. The calculation of large systems, such as porphine in the gas phase and a model of the complete retinal binding pocket in rhodopsin with 622 basis functions on 280 atoms at the quantum mechanical level, shows reliability, giving a first allowed transition at 483 nm, very similar to the known experimental value of 500 nm for the "dark state." In this important case, our model assigns a central role in this excitation to a charge transfer from the neighbouring Glu(-) counterion to the retinaldehyde polyene chain. Tests with gas-phase maxima of some important molecules corroborate the reliability of the CNDOL/2 Hamiltonians.
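As a quick check of how close the computed and experimental maxima quoted above are on an energy scale, both wavelengths can be converted with E = hc/λ. The snippet below is only this arithmetic; it is not part of the CNDOL method itself.

```python
# Convert the computed (483 nm) and experimental (500 nm) absorption maxima
# to transition energies with E = hc/lambda, using hc = 1239.84 eV*nm.
HC_EV_NM = 1239.84

for label, wavelength_nm in [("computed", 483.0), ("experimental", 500.0)]:
    print(f"{label}: {HC_EV_NM / wavelength_nm:.3f} eV")
# The two values differ by roughly 0.09 eV, i.e. the agreement quoted above.
```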

Relevance: 30.00%

Abstract:

The functional method is a new test theory using a new scoring method that assumes complexity in the test structure and thus takes into account every correlation between factors and items. The main specificity of the functional method is to model test scores by multiple regression instead of estimating them with simplistic sums of points. To do so, the functional method requires the creation of a hyperspherical measurement space, in which item responses are expressed by their correlation with orthogonal factors. This method has three main qualities. First, measures are expressed in the absolute metric of correlations; therefore, items, scales and persons are expressed in the same measurement space using the same single metric. Second, factors are systematically orthogonal and error-free, which is optimal for predicting other outcomes. Such predictions can be performed to estimate how someone would answer other tests, or even to model their response strategy if it were perfectly coherent. Third, the functional method provides measures of individuals' response validity (i.e., control indices). Herein, we propose a standard procedure to identify whether test results are interpretable and to exclude invalid results caused by various response biases, based on these control indices.
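A minimal sketch of the contrast drawn above between sum scores and regression-based scores on orthogonal factors. The simulated data, the number of factors and the ordinary-least-squares estimator are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item-response matrix: 200 respondents x 12 items (z-scored).
responses = rng.standard_normal((200, 12))

# Hypothetical loadings of the 12 items on 3 orthogonal factors.
loadings = rng.standard_normal((12, 3))

# Classical approach: a simple sum of points per respondent.
sum_scores = responses.sum(axis=1)

# Regression-type approach: factor scores obtained by least squares, so every
# item-factor relation contributes with its own weight instead of a flat sum.
factor_scores, *_ = np.linalg.lstsq(loadings, responses.T, rcond=None)
factor_scores = factor_scores.T          # shape: (respondents, factors)

print(sum_scores.shape, factor_scores.shape)
```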

Relevance: 30.00%

Abstract:

Organizations acquire resources, skills and technologies in order to find the ultimate mix of capabilities that makes them a winner in the competitive market. These are all important factors that need to be taken into account by organizations operating in today's business environment. So far, there are no significant studies on organizational capabilities in the field of PSM, and the literature review shows that PSM capabilities need to be studied more comprehensively. This study attempts to reveal and fill this gap by providing a PSM capability matrix that identifies the key PSM capabilities, approached from two angles: there are three primary PSM capabilities and nine sub-capabilities and, moreover, individual and organizational PSM capabilities are identified and evaluated. The former refers to the PSM capability matrix of this study, which is based on the strategic and operative PSM capabilities that complement the economic ones, while the latter relates to the evaluation of the PSM capabilities, such as the buyer profiles of individual PSM capabilities and the PSM capability map of the organizational ones. This is a constructive case study. The aim is to define what purchasing and supply management capabilities are and how they can be evaluated. This study presents a PSM capability matrix to identify and evaluate the capabilities and to define capability gaps by comparing the ideal level of PSM capabilities to the realized one. The research questions are investigated with two case organizations. This study argues that PSM capabilities can be classified into three primary categories with nine sub-categories and that, thus, a PSM capability matrix with four evaluation categories can be formed. The buyer profiles are moreover identified to reveal the PSM capability gap. The resource-based view (RBV) and the dynamic capabilities view (DCV) are used to define the individual and organizational capabilities, and the PSM literature is also used to define the capabilities. The key findings of this study are (i) the PSM capability matrix to identify the PSM capabilities, (ii) the evaluation of the capabilities to define PSM capability gaps and (iii) the presentation of the buyer profiles to identify the individual PSM capabilities and to define the organizational PSM capabilities. Dynamic capabilities are also related to the PSM capability gap: if a gap is identified, the organization can renew its PSM capabilities and thus create mutual learning and increase its organizational capabilities; only then is there potential for dynamic capabilities. Based on this, the purchasing strategy, purchasing policy and procedures should be identified and implemented dynamically.
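The notion of a capability gap as the difference between ideal and realized levels can be illustrated with a tiny sketch. The capability names follow the three primary categories named in the abstract, but the levels and the four-point scale are hypothetical.

```python
# Hypothetical PSM capability matrix: ideal vs. realized level on a 1-4 scale.
ideal = {"strategic": 4, "operative": 3, "economic": 3}
realized = {"strategic": 2, "operative": 3, "economic": 2}

# A capability gap exists wherever the realized level falls short of the ideal.
gaps = {name: ideal[name] - realized[name]
        for name in ideal if realized[name] < ideal[name]}
print(gaps)  # e.g. {'strategic': 2, 'economic': 1}
```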

Relevance: 30.00%

Abstract:

The analysis of rockfall characteristics and spatial distribution is fundamental to understanding and modelling the main factors that predispose to failure. In our study we analysed LiDAR point clouds aiming to (1) detect and characterise single rockfalls and (2) investigate their spatial distribution. To this end, different cluster algorithms were applied: (1a) Nearest Neighbour Clutter Removal (NNCR) in combination with Expectation-Maximization (EM), in order to separate feature points from clutter; (1b) a density-based algorithm (DBSCAN), applied to isolate the single clusters (i.e. the rockfall events); (2) finally, we computed Ripley's K-function to investigate the global spatial pattern of the extracted rockfalls. The method allowed proper identification and characterization of more than 600 rockfalls that occurred on a cliff located in Puigcercos (Catalonia, Spain) during a time span of six months. The spatial distribution of these events showed that the rockfalls were clustered at a well-defined distance range. Computations were carried out using the free software R for statistical computing and graphics. Understanding the spatial distribution of precursory rockfalls may shed light on the forecasting of future failures.
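The original analysis was carried out in R, but the density-based clustering step can be sketched in Python with scikit-learn's DBSCAN. The input file, coordinates and the eps/min_samples values below are illustrative assumptions, not the parameters of the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Assumed input: LiDAR-difference points flagged as rockfall candidates,
# one (x, y, z) row per point, after clutter removal.
points = np.loadtxt("rockfall_points.xyz")          # shape: (n_points, 3)

# Density-based clustering: each resulting cluster is treated as one rockfall
# event; eps and min_samples are illustrative values only.
labels = DBSCAN(eps=0.15, min_samples=10).fit_predict(points)

n_events = len(set(labels)) - (1 if -1 in labels else 0)   # -1 marks noise
print(f"detected rockfall events: {n_events}")
```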

Relevance: 30.00%

Abstract:

This study aimed at comparing the efficiency of various sampling materials for the collection and subsequent analysis of organic gunshot residues (OGSR). To the best of our knowledge, this is the first time that sampling devices have been investigated in detail for the subsequent quantitation of OGSR by LC-MS. Seven sampling materials, namely two "swab"-type and five "stub"-type collection materials, were tested. The investigation started with the development of a simple and robust LC-MS method able to separate and quantify molecules typically found in gunpowders, such as diphenylamine or ethylcentralite. The evaluation of the sampling materials was then carried out systematically by first analysing blank extracts of the materials to check for potential interferences and by determining matrix effects. Based on these results, the best four materials, namely cotton buds, polyester swabs, a tape from 3M and PTFE, were compared in terms of collection efficiency during shooting experiments using a set of 9 mm Luger ammunition. It was found that the tape recovered the highest amounts of OGSR. As tape-lifting is the technique currently used routinely for inorganic GSR, OGSR analysis might be implemented without modifying the IGSR sampling and analysis procedure.
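Matrix effects in LC-MS are commonly expressed by comparing the response of an analyte spiked into a blank extract of the sampling material with the response of the same amount in neat solvent. The convention and the peak areas below are illustrative assumptions, not necessarily the exact calculation used in the study.

```python
def matrix_effect_percent(area_spiked_extract: float, area_neat_standard: float) -> float:
    """Common LC-MS convention: 0 % = no matrix effect,
    negative = ion suppression, positive = ion enhancement."""
    return (area_spiked_extract / area_neat_standard - 1.0) * 100.0

# Hypothetical peak areas for diphenylamine spiked into a blank extract of a
# sampling material versus the same amount in neat solvent.
print(f"matrix effect: {matrix_effect_percent(8.1e5, 9.5e5):+.1f} %")  # about -14.7 %
```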

Relevance: 30.00%

Abstract:

This review has tried to collect and correlate the various equations for the g matrix of strong-field d5 systems obtained from different basis sets using full-electron and hole-formalism calculations. It has corrected mistakes found in the literature and shown how the failure to properly take into account symmetry boundary conditions has produced a variety of apparently inconsistent equations in the literature. The review has re-examined the problem of spin-orbit interaction with excited t4e states and finds that the earlier reports that it is zero in octahedral symmetry are not correct. It has also shown how redefining what x, y, and z are in the principal coordinate system simplifies, compared with previous methods, the analysis of experimental g values with the equations.

Relevance: 30.00%

Abstract:

Needle trap devices (NTDs) are a relatively new and promising tool for headspace (HS) analysis. In this study, a dynamic HS sampling procedure is evaluated for the determination of volatile organic compounds (VOCs) in whole blood samples. A full factorial design was used to evaluate the influence of the number of cycles and the incubation time, and it is demonstrated that the controlling factor in the process is the number of cycles. A mathematical model can be used to determine the most appropriate number of cycles required to adsorb a prefixed amount of the VOCs present in the HS phase, provided that quantitative adsorption is reached in each cycle. Matrix effects are of great importance when complex biological samples, such as blood, are analysed. The evaluation of the salting-out effect showed a significant improvement in the volatilization of VOCs into the HS in this type of matrix. Moreover, a 1:4 (blood:water) dilution is required to obtain quantitative recoveries of the target analytes when external calibration is used. The method developed gives detection limits in the 0.020–0.080 μg L⁻¹ range (0.1–0.4 μg L⁻¹ for undiluted blood samples) with appropriate repeatability values (RSD < 15% at the high concentration level and < 23% at the LOQ level). The figures of merit of the method can be improved by using a smaller phase ratio (i.e., an increase in the blood volume and a decrease in the HS volume), which leads to lower detection limits, better repeatability and greater sensitivity. Twenty-eight blood samples were evaluated with the proposed method and the results agree with those reported in other studies. Benzene was the only target compound that showed significant differences between the blood levels detected in non-smoking and smoking volunteers.
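The cycle model mentioned above can be sketched under the standard multiple-headspace-extraction assumption that each cycle quantitatively traps the analyte present in the headspace and the sample then re-equilibrates before the next cycle. The partition coefficient, phase volumes and target fraction below are hypothetical values chosen only to illustrate the calculation.

```python
import math

def cycles_needed(target_fraction: float, K: float, v_sample: float, v_hs: float) -> int:
    """Number of cycles to extract at least `target_fraction` of the analyte,
    assuming each cycle quantitatively traps what is in the headspace and the
    sample re-equilibrates between cycles.

    K is the (assumed) sample/headspace partition coefficient C_sample / C_HS.
    """
    f_hs = 1.0 / (1.0 + K * v_sample / v_hs)   # fraction in the HS at equilibrium
    # After n cycles the extracted fraction is 1 - (1 - f_hs) ** n.
    return math.ceil(math.log(1.0 - target_fraction) / math.log(1.0 - f_hs))

# Hypothetical example: K = 10, 2 mL diluted blood, 8 mL headspace, 90 % target.
print(cycles_needed(0.90, K=10.0, v_sample=2.0, v_hs=8.0))  # -> 7 cycles
```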