952 results for angular correlation coefficient


Relevance:

80.00%

Publisher:

Abstract:

Introduction. Right heart catheterization is the reference standard for the diagnosis of pulmonary hypertension; however, echocardiography as an initial study has shown good correlation with the variables measured by catheterization. The present study aims to describe the degree of correlation and agreement between echocardiography and right heart catheterization for the measurement of pulmonary artery systolic pressure. Materials and methods. A retrospective observational study was conducted of patients who underwent right heart catheterization between 2009 and 2014; their data were compared with those of the echocardiogram closest in time to the catheterization, considering the pulmonary artery systolic pressure (PASP) in both diagnostic modalities by means of statistical correlation and agreement, using Pearson's coefficient and Lin's concordance index respectively. Results. A total of 169 patients were included; the correlation coefficient (r) obtained for the PASP measurement across the whole sample was 0.73 (p < 0.0001), showing a high degree of correlation for the entire sample evaluated. The agreement obtained for the whole population from Lin's index was 0.71, indicating poor concordance. Discussion. Good correlation was found between echocardiography and right heart catheterization for the measurement of PASP; however, agreement between the diagnostic methods is poor, so echocardiography does not replace right heart catheterization as the study of choice for the diagnosis and follow-up of patients with pulmonary hypertension.
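The two statistics named in this abstract measure different things: Pearson's r captures linear association, while Lin's concordance index also penalises systematic bias between the two methods (which is why a pair of instruments can correlate well yet agree poorly). A minimal NumPy sketch of both; the sample arrays in the usage note are illustrative, not the study data:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson product-moment correlation of two paired samples.
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

def lin_ccc(x, y):
    # Lin's concordance correlation coefficient:
    # 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return float(2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2))
```

For example, two measurements offset by a constant bias correlate perfectly (r = 1) but concord imperfectly: `lin_ccc([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])` is 0.8.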


Our research falls within the dynamic conception of intelligence, and specifically within the processes that configure cerebral processing in the information-integration model described by Das, Kirby and Jarman (1979). The two cerebral processes that form the basis of intelligent behaviour are simultaneous processing and sequential processing; they are the two main strategies of information processing. Any kind of stimulus can be processed either sequentially (seriation, verbal, analysis) or simultaneously (global, visual, synthesis). Based on the literature review, and convinced that by approaching the particularities of information processing we gain insight into the process that leads to intelligent behaviour, and therefore to learning, we formulate the following working hypothesis: in preschool children (between three and six years of age) both types of processing will be present and will vary as a function of age, sex, attention, learning difficulties, language problems, bilingualism, sociocultural level, hand dominance, mental level and the presence of pathology. The differences found will allow us to formulate criteria and guidelines for educational intervention. Our objectives are to measure processing in preschool children from the Girona counties, to verify the relationship of each type of processing with the aforementioned variables, to check, on the basis of our results, whether a parallel can be drawn between processing and the contributions of the localizationist conception of cerebral functions, and to propose guidelines for pedagogical intervention. As for the method, we selected a representative sample of the boys and girls enrolled in the public schools of the Girona counties during the 1992/93 school year, by means of stratified and cluster random sampling.
The actual sample size is two hundred and sixty-one subjects. The instruments used were the following: the Kaufman & Kaufman (1983) K-ABC test for the assessment of processing; a questionnaire addressed to the parents to gather the relevant information; interviews with the teachers; and the Goodenough Human Figure Drawing test. With regard to the results of our research, and in line with the proposed objectives, we note the following facts. In preschool children aged between three and six years, the existence of the two types of cerebral processing is confirmed, with neither predominating over the other; the two types of processing act in an interrelated manner. Both types of processing improve with age, but differences arise from mental level: a normal mental level is associated with improvement in both types of processing, whereas with a deficient mental level essentially only sequential processing improves. Moreover, simultaneous processing is more related to complex cognitive functions and is more dependent on mental level than sequential processing. Both learning difficulties and language problems predominate in children with a significant imbalance between the two types of processing; learning difficulties are more related to a deficiency in simultaneous processing, whereas language problems are more related to a deficiency in sequential processing. Low sociocultural levels are associated with lower results in both types of processing. On the other hand, significant sequential processing is more frequent among bilingual children.
The Human Figure Drawing test behaves as a marker of simultaneous processing, and attention level as a marker of the severity of the problem affecting processing, in the following order: deficient mental level, learning difficulties and language problems. Attentional deficiencies are linked to deficiencies in simultaneous processing and to the presence of pathology. As for hand dominance, no differences in processing are found. Finally, regarding sex, we can only report that when one of the two types of processing is deficient, and there is therefore an imbalance in processing, the number of boys affected significantly exceeds that of girls.


The North Atlantic Oscillation (NAO) is an important large-scale atmospheric circulation pattern that influences the climate of European countries. This study evaluated the impact of the NAO on air quality in the Porto Metropolitan Area (PMA), Portugal, for the period 2002-2006. NAO, air pollutant and meteorological data were statistically analyzed. All data were obtained from the PMA weather station, the PMA air quality stations and NOAA analyses. Two statistical methods were applied on different time scales: principal component analysis and the correlation coefficient. On the annual time scale, multivariate analysis (PCA, principal component analysis) was applied in order to identify positive and significant associations between air pollutants such as PM10, PM2.5, CO, NO and NO2, and the NAO. The correlation coefficient, on a seasonal time scale, was also applied to the same data. The PCA results show a generally negative and significant association between total precipitation and the NAO in Factors 1 and 2 (explaining around 70% of the variance) in the years 2002, 2004 and 2005. During the same years, several air pollutants (such as PM10, PM2.5, SO2, NOx and CO) also presented a positive association with the NAO. O3 likewise showed a positive association with the NAO during 2002 and 2004, in the 2nd factor, which explains 30% of the variance. From the seasonal analysis using the correlation coefficient, significant correlations were found between PM10 (0.72, p<0.05, in 2002), PM2.5 (0.74, p<0.05, in 2004) and SO2 (0.78, p<0.01, in 2002) and the NAO during the March-December (non-winter) period. Significant associations between air pollutants and the NAO were also found in the winter period (December to April), mainly with ozone (2005, r=-0.55, p<0.01). Since human health and hospital morbidity may be affected by air pollution, the results suggest that NAO forecasts can be an important tool for prevention in the Iberian Peninsula and especially Portugal.
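A minimal sketch of correlation-matrix PCA as used for the annual-scale analysis above; the column layout (e.g. NAO index, pollutants, precipitation) is an assumption for illustration, not the study's dataset:

```python
import numpy as np

def pca(X, n_components=2):
    # PCA on the correlation matrix of the columns of X:
    # eigendecompose, sort components by descending eigenvalue,
    # return loadings and the fraction of variance each explains.
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    return vecs[:, :n_components], vals[:n_components] / vals.sum()
```

Variables loading strongly (and with the same sign) on a leading factor are positively associated within that factor, which is how pollutant-NAO associations are read off the loadings.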


A collection of 24 seawaters from various worldwide locations and differing depths was culled to measure their chlorine isotopic composition (δ37Cl). These samples cover all the oceans and large seas: the Atlantic, Pacific, Indian and Antarctic oceans, and the Mediterranean and Red seas. The collection includes nine seawaters from three depth profiles down to 4560 mbsl. The standard deviation (2σ) of the δ37Cl of this collection is ±0.08‰, which is in fact as large as our precision of measurement (±0.10‰). Thus, within error, oceanic waters seem to be a homogeneous reservoir. According to our results, any seawater could be representative of Standard Mean Ocean Chloride (SMOC) and could be used as a reference standard. An extended international cross-calibration over a large range of δ37Cl has been completed. For this purpose, geological fluid samples of various chemical compositions and a manufactured CH3Cl gas sample, with δ37Cl from about -6‰ to +6‰, have been compared. Data were collected by gas-source isotope ratio mass spectrometry (IRMS) at the Paris, Reading and Utrecht laboratories and by thermal ionization mass spectrometry (TIMS) at the Leeds laboratory. Comparison of IRMS values over the range -5.3‰ to +1.4‰ plots on the Y=X line, showing very good agreement between the three laboratories. On 11 samples, the trend line between the Paris and Reading laboratories is: δ37Cl(Reading) = (1.007 ± 0.009)δ37Cl(Paris) - (0.040 ± 0.025), with a correlation coefficient R² = 0.999. TIMS values from Leeds University have been compared to IRMS values from Paris University over the range -3.0‰ to +6.0‰.
On six samples, the agreement between these two laboratories, using different techniques, is good: δ37Cl(Leeds) = (1.052 ± 0.038)δ37Cl(Paris) + (0.058 ± 0.099), with a correlation coefficient R² = 0.995. The present study completes a previous cross-calibration between the Leeds and Reading laboratories comparing TIMS and IRMS results (Anal. Chem. 72 (2000) 2261). Both studies allow a comparison of the IRMS and TIMS techniques for δ37Cl values from -4.4‰ to +6.0‰ and show good agreement: δ37Cl(TIMS) = (1.039 ± 0.023)δ37Cl(IRMS) + (0.059 ± 0.056), with a correlation coefficient R² = 0.996. Our study shows that, for fluid samples whose chlorine isotopic compositions are near 0‰, measurement by either IRMS or TIMS will give comparable results to within less than ±0.10‰, while for δ37Cl values as far as 10‰ (either positive or negative) from SMOC, the two techniques will agree to within less than ±0.30‰. (C) 2004 Elsevier B.V. All rights reserved.
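The inter-laboratory trend lines quoted above are straight-line fits with an R² goodness measure; a minimal sketch of such a calibration fit using ordinary least squares (the sample values are illustrative, not the published dataset):

```python
import numpy as np

def calibration_line(x, y):
    # Least-squares fit y = slope*x + intercept with R^2, the form used
    # to compare isotope measurements of the same samples between labs.
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    return slope, intercept, r2
```

A slope near 1 and intercept near 0 (with R² near 1) indicate that the two laboratories' scales agree over the fitted range.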


Cryoturbated Upper Chalk is a dichotomous porous medium in which the intra-fragment porosity provides water storage and the inter-fragment porosity provides potential pathways for relatively rapid flow near saturation. Chloride tracer movement through 43 cm long and 45 cm diameter undisturbed chalk columns was studied at water application rates of 0.3, 1.0 and 1.5 cm h⁻¹. Microscale heterogeneity in the effluent was recorded using a grid collection system consisting of 98 funnel-shaped cells, each 3.5 cm in diameter. The total porosity of the columns was 0.47 ± 0.02 m³ m⁻³, approximately 13% of pores were >15 µm in diameter, and the saturated hydraulic conductivity was 12.66 ± 1.31 m day⁻¹. Although the columns remained unsaturated during the leaching event at all application rates, the proportion of flow through macropores increased as the application rate decreased. The number of dry cells (with 0 ml of effluent) also increased as the application rate decreased. Half of the leachate was collected from 15, 19 and 22 cells at the 0.3, 1.0 and 1.5 cm h⁻¹ application rates respectively. Similar breakthrough curves (BTCs) were obtained at all three application rates when plotted as a function of cumulative drainage, but they were distinctly different when plotted as a function of time. The BTCs indicate that the columns have a similar drainage requirement irrespective of application rate, as the rise to the maximum (C/C₀) is very similar. However, the time required to achieve that leaching requirement varies with application rate, and the residence time was shorter at the higher application rates. A two-region convection-dispersion model was used to describe the BTCs and fitted well (r² = 0.97-0.99). There was a linear relationship between the dispersion coefficient and pore water velocity (correlation coefficient r = 0.95). The results demonstrate the microscale heterogeneity of hydrodynamic properties in the Upper Chalk. Copyright (C) 2007 John Wiley & Sons, Ltd.
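For orientation, the equilibrium (single-region) limit of the convection-dispersion model fitted above has a closed-form breakthrough curve; a sketch under the common first-term approximation for a step tracer input (the full two-region model additionally couples mobile and immobile water):

```python
import math

def btc_step(x, t, v, D):
    # Relative concentration C/C0 at depth x and time t from the
    # first-term analytical solution of the 1-D convection-dispersion
    # equation: pore-water velocity v, dispersion coefficient D.
    return 0.5 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))
```

At the moment the advective front reaches the sampling depth (x = v·t), the relative concentration is exactly 0.5, and it rises toward 1 as drainage continues.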


The extent to which airborne particles penetrate the human respiratory system is determined mainly by their size, with consequent potential health effects. Research on the scientific evidence for the role of airborne particles in adverse health effects has intensified in recent years. In the present study, seasonal variations of PM10 and their relation to anthropogenic activities have been studied using data from the UK National Air Quality Archive for Reading, UK. The diurnal variation of PM10 shows a morning peak during 7:00-10:00 LT and an evening peak during 19:00-22:00 LT. The variation between 12:00 and 17:00 LT remains more or less steady for PM10, with a minimum value of ~16 µg m⁻³. PM10 and black smoke (BS) concentrations on weekdays were found to be high compared with weekends. A reduction in the concentration of PM10 was found during the Christmas holidays compared with normal days in December. Seasonal variations of PM10 showed high values during spring compared with the other seasons. A linear relationship was found between PM10 and NOx during March, July, November and December, suggesting that most of the PM10 is due to local traffic exhaust emissions. PM10 and SO2 concentrations showed a positive correlation, with a correlation coefficient of R² = 0.65 over the study area. Seasonal variations of SO2 and NOx showed high concentrations during winter and low concentrations during spring. The fraction of BS in PM10 was found to be 50% during 2004 over the study area. (C) 2005 Elsevier Ltd. All rights reserved.


We propose a novel method for scoring the accuracy of protein binding site predictions: the Binding-site Distance Test (BDT) score. Recently, the Matthews Correlation Coefficient (MCC) has been used to evaluate binding site predictions, both by developers of new methods and by the assessors of the community-wide prediction experiment CASP8. Whilst a rigorous scoring method, the MCC does not take into account the actual 3D distance of the predicted residues from the observed binding site. Thus, an incorrectly predicted site that is nevertheless close to the observed binding site will obtain an identical score to the same number of non-binding residues predicted at random. The MCC is also somewhat affected by the subjectivity of determining the observed binding residues and the ambiguity of choosing distance cutoffs. By contrast, the BDT method produces continuous scores ranging between 0 and 1, related to the distance between the predicted and observed residues. Residues predicted close to the binding site score higher than those more distant, providing a better reflection of the true accuracy of the predictions. The CASP8 function predictions were evaluated using both the MCC and BDT methods and the scores were compared. BDT scores were found to correlate strongly with MCC scores, whilst being less susceptible to the subjectivity of defining binding residues. We therefore suggest that this new, simple score is a potentially more robust method for future evaluations of protein-ligand binding site predictions.
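For reference, the MCC baseline that BDT is compared against is computed directly from the binary confusion matrix of predicted versus observed binding residues:

```python
import math

def mcc(tp, tn, fp, fn):
    # Matthews correlation coefficient from confusion-matrix counts;
    # returns 0.0 when any marginal total is zero (undefined case).
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

The score ranges from -1 (total disagreement) through 0 (random) to +1 (perfect agreement), but, as the abstract notes, it is distance-blind: a near-miss prediction and a random one with the same counts score identically.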


The formulation of a new process-based crop model, the General Large-Area Model (GLAM) for annual crops, is presented. The model has been designed to operate on spatial scales commensurate with those of global and regional climate models. It aims to simulate the impact of climate on crop yield. Procedures for model parameter determination and optimisation are described, and demonstrated for the prediction of groundnut (i.e. peanut; Arachis hypogaea L.) yields across India for the period 1966-1989. Optimal parameters (e.g. extinction coefficient, transpiration efficiency, rate of change of harvest index) were stable over space and time, provided the estimate of the yield technology trend was based on the full 24-year period. The model has two location-specific parameters: the planting date and the yield gap parameter. The latter varies spatially and is determined by calibration. The optimal value varies slightly when different input data are used. The model was tested using a historical data set on a 2.5° x 2.5° grid to simulate yields. Three sites are examined in detail: grid cells from Gujarat in the west, Andhra Pradesh towards the south, and Uttar Pradesh in the north. Agreement between observed and modelled yield was variable, with correlation coefficients of 0.74, 0.42 and 0, respectively. Skill was highest where the climate signal was greatest, and correlations were comparable to or greater than correlations with seasonal mean rainfall. Yields from all 35 cells were aggregated to simulate the all-India yield. The correlation coefficient between observed and simulated yields was 0.76, and the root mean square error was 8.4% of the mean yield. The model can easily be extended to any annual crop for the investigation of the impacts of climate variability (or change) on crop yield over large areas. (C) 2004 Elsevier B.V. All rights reserved.


Grass-based diets are of increasing socio-economic importance in dairy cattle farming, but their low supply of glucogenic nutrients may limit the production of milk. Current evaluation systems that assess the energy supply and requirements are based on metabolisable energy (ME) or net energy (NE). These systems do not consider the characteristics of the energy-delivering nutrients. In contrast, mechanistic models take into account the site of digestion, the type of nutrient absorbed and the type of nutrient required for production of milk constituents, and may therefore give a better prediction of nutrient supply and requirement. The objective of the present study is to compare the ability of three energy evaluation systems, viz. the Dutch NE system, the Agricultural and Food Research Council (AFRC) ME system and the Feed into Milk (FIM) ME system, and of a mechanistic model based on Dijkstra et al. [Simulation of digestion in cattle fed sugar cane: prediction of nutrient supply for milk production with locally available supplements. J. Agric. Sci., Cambridge 127, 247-260] and Mills et al. [A mechanistic model of whole-tract digestion and methanogenesis in the lactating dairy cow: model development, evaluation and application. J. Anim. Sci. 79, 1584-1597] to predict the feed value of grass-based diets for milk production. The dataset for evaluation consists of 41 treatments of grass-based diets (at least 0.75 g ryegrass/g diet on a DM basis). For each model, the predicted energy or nutrient supply, based on observed intake, was compared with the predicted requirement based on observed performance. The error of energy or nutrient supply relative to requirement is assessed by calculation of the mean square prediction error (MSPE) and the concordance correlation coefficient (CCC). All energy evaluation systems predicted energy requirement to be lower (6-11%) than energy supply.
The root MSPE (expressed as a proportion of the supply) was lowest for the mechanistic model (0.061), followed by the Dutch NE system (0.082), the FIM ME system (0.097) and the AFRC ME system (0.118). For the energy evaluation systems, the error due to overall bias of prediction dominated the MSPE, whereas for the mechanistic model, proportionally 0.76 of the MSPE was due to random variation. CCC analysis confirmed the higher accuracy and precision of the mechanistic model compared with the energy evaluation systems. The error of prediction was positively related to grass protein content for the Dutch NE system, and was also positively related to grass DMI level for all models. In conclusion, current energy evaluation systems overestimate energy supply relative to energy requirement on grass-based diets for dairy cattle. The mechanistic model predicted glucogenic nutrients to limit the performance of dairy cattle on grass-based diets, and proved to be more accurate and precise than the energy systems. The mechanistic model could be improved by allowing the glucose maintenance and utilization requirement parameters to be variable. (C) 2007 Elsevier B.V. All rights reserved.
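The attribution of MSPE to overall bias versus random variation reported above rests on a standard decomposition of the MSPE into bias, slope and random components; a sketch of one common form of that decomposition (the data are illustrative, not the 41-treatment dataset):

```python
import numpy as np

def mspe_decomposition(obs, pred):
    # Split MSPE into overall bias, slope (regression) and random
    # components; uses population moments so the three parts sum
    # exactly to the MSPE.
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mspe = np.mean((pred - obs) ** 2)
    r = np.corrcoef(obs, pred)[0, 1]
    so, sp = obs.std(), pred.std()
    bias = (pred.mean() - obs.mean()) ** 2
    slope = (sp - r * so) ** 2
    random = (1.0 - r ** 2) * so ** 2
    return mspe, bias, slope, random
```

A system whose error is dominated by the bias term consistently over- or under-predicts (as the energy systems do here), whereas a large random component indicates scatter rather than systematic error.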


Accurately and reliably identifying the actual number of clusters present within a dataset of gene expression profiles, when no additional information on cluster structure is available, is a problem addressed by few algorithms. GeneMCL transforms microarray analysis data into a graph consisting of nodes connected by edges, where the nodes represent genes and the edges represent the similarity in expression of those genes, as given by a proximity measurement. This measurement is taken to be the Pearson correlation coefficient combined with a local non-linear rescaling step. The resulting graph is input to the Markov Cluster (MCL) algorithm, an elegant, deterministic, non-specific and scalable method that models stochastic flow through the graph. The algorithm is inherently affected by any cluster structure present, and rapidly decomposes a graph into cohesive clusters. The potential of the GeneMCL algorithm is demonstrated with a 5730 gene subset (IGS) of the Van't Veer breast cancer database, for which the clusterings are shown to reflect underlying biological mechanisms. (c) 2005 Elsevier Ltd. All rights reserved.
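The two stages can be sketched in a few lines: build a Pearson-similarity graph over genes, then run a bare-bones MCL iteration on it. Thresholding the correlation matrix into edges is a simplification standing in for the local non-linear rescaling described above, and the inflation parameter is illustrative:

```python
import numpy as np

def correlation_graph(X, threshold=0.8):
    # Genes in rows; connect genes whose expression profiles have a
    # Pearson correlation above the threshold (no self-edges).
    A = (np.corrcoef(X) > threshold).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def mcl(A, inflation=2.0, iters=30):
    # Minimal Markov Cluster iteration: add self-loops, column-normalize
    # to a stochastic matrix, then alternate expansion (M @ M) and
    # inflation (elementwise power) until flow settles into clusters.
    M = A + np.eye(len(A))
    M /= M.sum(axis=0)
    for _ in range(iters):
        M = M @ M
        M = M ** inflation
        M /= M.sum(axis=0)
    return M
```

Flow never crosses between disconnected groups of co-expressed genes, so the converged matrix exposes the cluster structure directly in its non-zero pattern.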


Because of the importance and potential usefulness of construction market statistics to firms and government, consistency between different sources of data is examined with a view to building a predictive model of construction output using construction data alone. However, a comparison of Department of Trade and Industry (DTI) and Office for National Statistics (ONS) series shows that the correlation coefficient (used as a measure of consistency) between the DTI output and DTI orders data, and between the DTI output and ONS output data, is low. It is not possible to derive a predictive model of DTI output based on DTI orders data alone. The question arises whether an alternative independent source of data may be used to predict the DTI output data. Independent data produced by Emap Glenigan (EG), based on planning applications, potentially offers such a source of information. The EG data record the value of planning applications and their planned start and finish dates. However, as these data are ex ante and are not correlated with DTI output, it is not possible to use them to describe the volume of actual construction output. Nor is it possible to use the EG planning data to predict the DTI construction orders data. Further consideration of the issues raised reveals that it is not practically possible to develop a consistent predictive model of construction output using construction statistics gathered at different stages in the development process.


Several pixel-based people counting methods have been developed over the years. Among these, the product of scale-weighted pixel sums and a linear correlation coefficient is a popular people counting approach. However, most approaches have paid little attention to resolving the true background and instead take all foreground pixels into account. With large crowds moving at varying speeds, and in the presence of other moving objects such as vehicles, this approach is prone to problems. In this paper we present a method which concentrates on determining the true foreground, i.e. human-image pixels only. To do this we have proposed, implemented and comparatively evaluated a human detection layer to make people counting more robust in the presence of noise and a lack of empty background sequences. We show the effect of combining human detection with a pixel-map based algorithm to (i) count only human-classified pixels and (ii) prevent foreground pixels belonging to humans from being absorbed into the background model. We evaluate the performance of this approach on the PETS 2009 dataset using various configurations of the proposed methods. Our evaluation demonstrates that the basic benchmark method we implemented can achieve an accuracy of up to 87% on sequence "S1.L1 13-57 View 001", while our proposed approach can achieve up to 82% on sequence "S1.L3 14-33 View 001", where the crowd stops and the benchmark accuracy falls to 64%.
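The counting scheme of restricting a scale-weighted pixel sum to human-classified pixels can be sketched as follows; the array names and the linear calibration (a, b, fitted on training frames) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def people_count(fg_mask, human_mask, scale_map, a, b):
    # Sum only pixels that are both foreground and human-classified,
    # weighted by a per-pixel perspective scale map, then map the sum
    # to a people count with a fitted linear model.
    weighted = (fg_mask & human_mask) * scale_map
    return a * weighted.sum() + b
```

Intersecting the two masks is what keeps vehicle pixels and other non-human foreground out of the count, which is the robustness gain the abstract describes.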


The differential phase (ΦDP) measured by polarimetric radars is recognized to be a very good indicator of the rain integrated along the path. Moreover, if a linear relationship is assumed between the specific differential phase (KDP) and the specific attenuation (AH) and specific differential attenuation (ADP), then attenuation can easily be corrected. The coefficients of proportionality, γH and γDP, are, however, known to depend in rain upon drop temperature, drop shapes, drop size distribution, and the presence of large drops causing Mie scattering. In this paper, the authors extensively apply a physically based method, often referred to as the "Smyth and Illingworth constraint," which uses the requirement that the value of the differential reflectivity ZDR on the far side of the storm should be low in order to retrieve the γDP coefficient. More than 30 convective episodes observed by the French operational C-band polarimetric Trappes radar during two summers (2005 and 2006) are used to document the variability of γDP with respect to the intrinsic three-dimensional characteristics of the attenuating cells. The Smyth and Illingworth constraint could be applied to only 20% of all attenuated rays of the 2-yr dataset, so it cannot be considered the unique solution for attenuation correction in an operational setting, but it is useful for characterizing the properties of the strongly attenuating cells. The range of variation of γDP is shown to be extremely large, with minimum, maximum, and mean values equal to 0.01, 0.11, and 0.025 dB °−1, respectively. The γDP coefficient appears to be almost linearly correlated with the horizontal reflectivity (ZH), differential reflectivity (ZDR), specific differential phase (KDP), and correlation coefficient (ρHV) of the attenuating cells. The temperature effect is negligible with respect to that of the microphysical properties of the attenuating cells.
Unusually large values of γDP, above 0.06 dB °−1, often referred to as "hot spots," are reported for 15% (a nonnegligible figure) of the rays presenting a significant total differential phase shift (ΔϕDP > 30°). The corresponding strongly attenuating cells are shown to have extremely high ZDR (above 4 dB) and ZH (above 55 dBZ), very low ρHV (below 0.94), and high KDP (above 4° km−1). Analysis of 4 yr of observed raindrop spectra does not reproduce such low values of ρHV, suggesting that (wet) ice is likely to be present in the precipitation medium and responsible for the attenuation and high phase shifts. Furthermore, if melting ice is responsible for the high phase shifts, this suggests that KDP may not be uniquely related to rainfall rate but can result from the presence of wet ice. This hypothesis is supported by the analysis of vertical profiles of horizontal reflectivity and the values of conventional probability-of-hail indexes.
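Under the linear KDP-attenuation assumption described above, correcting reflectivity from the measured differential phase reduces to one line per range gate; a hedged sketch (the γH default is an illustrative C-band figure, not a value retrieved in this paper):

```python
def correct_reflectivity(z_h, phi_dp, gamma_h=0.08):
    # Linear PhiDP-based attenuation correction: the two-way path
    # attenuation is taken proportional to the differential phase
    # shift, so corrected Z_H = measured Z_H + gamma_H * PhiDP.
    return [z + gamma_h * p for z, p in zip(z_h, phi_dp)]
```

The sensitivity to the coefficient is exactly why the order-of-magnitude spread of γDP documented here matters operationally: the same 30° phase shift implies very different path attenuations at γ = 0.01 and γ = 0.11.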


Modelling the interaction of terahertz (THz) radiation with biological tissue poses many interesting problems. THz radiation is not obviously described by either an electric field distribution or an ensemble of photons, and biological tissue is an inhomogeneous medium with an electric permittivity that is both spatially and frequency dependent, making it a complex system to model. A three-layer system of parallel-sided slabs has been used as the system through which the passage of THz radiation has been simulated. Two modelling approaches have been developed: a thin-film matrix model and a Monte Carlo model. The source data for each of these methods, taken at the same time as the data recorded to verify them experimentally, was a THz spectrum that had passed through air only. Experimental verification of the two models was carried out using a three-layered in vitro phantom. Simulated transmission spectrum data were compared with experimental transmission spectrum data, first to determine and then to compare the accuracy of the two methods. Good agreement was found, with typical results having a correlation coefficient of 0.90 for the thin-film matrix model and 0.78 for the Monte Carlo model over the full THz spectrum. Further work is under way to improve the models above 1 THz.
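A thin-film (characteristic) matrix model for a stack of parallel-sided slabs at normal incidence can be sketched as follows; the refractive indices, thicknesses and normalized-admittance form are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def slab_matrix(n, d_m, freq_hz):
    # Characteristic matrix of one homogeneous parallel-sided slab at
    # normal incidence; n may be complex to include absorption.
    delta = 2 * np.pi * freq_hz * n * d_m / 3e8   # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def stack_transmission(ns, ds, freq_hz, n_in=1.0, n_out=1.0):
    # Multiply the layer matrices (three layers for the phantom above,
    # but any number works) and convert the product to an amplitude
    # transmission coefficient for the whole stack.
    M = np.eye(2, dtype=complex)
    for n, d in zip(ns, ds):
        M = M @ slab_matrix(n, d, freq_hz)
    (m11, m12), (m21, m22) = M
    return 2 * n_in / (n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22)
```

Evaluating the stack transmission over a grid of frequencies yields the simulated transmission spectrum that is compared against the measured one.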


Aluminium is not a physiological component of the breast but has been measured recently in human breast tissues and breast cyst fluids at levels above those found in blood serum or milk. Since the presence of aluminium can lead to iron dyshomeostasis, levels of aluminium and iron-binding proteins (ferritin, transferrin) were measured in nipple aspirate fluid (NAF), a fluid present in the breast duct tree that mirrors the breast microenvironment. NAFs were collected noninvasively from healthy women (NoCancer; n = 16) and breast cancer-affected women (Cancer; n = 19), and compared with levels in serum (n = 15) and milk (n = 45) from healthy subjects. The mean level of aluminium, measured by ICP-mass spectrometry, was significantly higher in Cancer NAF (268.4 ± 28.1 μg l−1; n = 19) than in NoCancer NAF (131.3 ± 9.6 μg l−1; n = 16; P < 0.0001). The mean level of ferritin, measured by immunoassay, was also found to be higher in Cancer NAF (280.0 ± 32.3 μg l−1) than in NoCancer NAF (55.5 ± 7.2 μg l−1), and furthermore, a positive correlation was found between the levels of aluminium and ferritin in the Cancer NAF (correlation coefficient R = 0.94, P < 0.001). These results may suggest a role for raised levels of aluminium, and for modulation of proteins that regulate iron homeostasis, as biomarkers for the identification of women at higher risk of developing breast cancer. The reasons for the high levels of aluminium in NAF remain unknown, but possibilities include exposure to aluminium-based antiperspirant salts in the adjacent underarm area and/or preferential accumulation of aluminium by breast tissues.