977 resultados para Covariance matrix estimation


Relevância:

20.00%

Publicador:

Resumo:

The τ-function and the η-function are phenomenological models that are widely used in the context of timing interceptive actions and collision avoidance, respectively. The two models were previously considered unrelated: τ is a decreasing function that provides an estimate of time-to-contact (ttc) in the early phase of an object approach; in contrast, η has a maximum before ttc. Furthermore, it is not clear how either function could be implemented at the neuronal level in a biophysically plausible fashion. Here we propose a new framework, the corrected modified Tau function, capable of predicting both τ-type and η-type responses. The outstanding property of our new framework is its resilience to noise. We show that it can be derived from a firing rate equation and, like η, serves to describe the response curves of collision-sensitive neurons. Furthermore, we show that it predicts the psychophysical performance of subjects determining ttc. Our new framework is thus validated successfully against published and novel experimental data. Within the framework, links between τ-type and η-type neurons are established. It could therefore serve as a model for explaining the co-occurrence of such neurons in the brain.
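For a constant-velocity approach, the classical (uncorrected) τ and η functions can be computed directly from the optical variables; the object size, speed and the constant α below are purely illustrative:

```python
import numpy as np

# Object of half-size R approaching at constant speed v; distance x(t) = x0 - v*t.
R, x0, v = 0.5, 10.0, 2.0          # illustrative values (contact at t = 5 s)
t = np.linspace(0.0, 4.9, 1000)
x = x0 - v * t

theta = 2 * np.arctan(R / x)       # angular size of the object on the retina
theta_dot = np.gradient(theta, t)  # its rate of expansion

tau = theta / theta_dot            # tau: decreasing estimate of time-to-contact
alpha = 5.0                        # illustrative constant of the eta model
eta = theta_dot * np.exp(-alpha * theta)  # eta: peaks before contact
```

Early in the approach τ tracks the true time-to-contact (here about 5 s), while η rises and then falls again before contact, which is the qualitative difference the abstract describes.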


Introduction: "Osteo-Mobile Vaud" is a mobile osteoporosis (OP) screening program. Women > 60 years living in the Vaud region will be offered OP screening with new equipment installed in a bus. The main goal is to evaluate fracture risk by combining clinical risk factors (CRF) with information extracted from a single DXA scan: bone mineral density (BMD), vertebral fracture assessment (VFA), and micro-architecture (MA) evaluation. MA is now evaluable in daily practice through the Trabecular Bone Score (TBS), a novel grey-level texture measurement reflecting bone MA, based on experimental variograms of 2D projection images. TBS is very simple to obtain by reanalyzing a lumbar DXA scan, and has proven diagnostic and prognostic value, partially independent of CRF and BMD. A 5-year follow-up is planned. Method: The Osteo-Mobile Vaud cohort (1500 women > 60 years living in the Vaud region) started in July 2010. CRF for OP, lumbar spine and hip BMD, VFA by DXA, and MA evaluation by TBS are recorded. Preliminary results are reported. Results: By July 31st, we had evaluated 510 women: mean age 67 years, BMI 26 kg/m². 72 women had one or more fragility fractures; 39 had grade 2/3 vertebral fractures (VFx). TBS decreases with age (-0.005/year, p<0.001) and with BMI (-0.011 per kg/m², p<0.001). The correlation between BMD and site-matched TBS is low (r=0.4, p<0.001). For the lowest T-score BMD, odds ratios (OR, 95% CI) for grade 2/3 VFx and clinical OP fractures are 1.8 (1.1-2.9) and 2.3 (1.5-3.4), respectively. For TBS, the age-, BMI- and BMD-adjusted ORs (per SD decrease) for grade 2/3 VFx and clinical OP fractures are 1.9 (1.2-3.0) and 1.8 (1.2-2.7). The added value of TBS was independent of lumbar spine BMD or the lowest T-score (femoral neck, total hip or lumbar spine). Conclusion: As in previously published studies, these preliminary results confirm the partial independence of TBS from BMD. More importantly, combining TBS and BMD may significantly improve the identification of women with prevalent OP fractures. For the first time, we can obtain complementary information on fracture (VFA), density (BMD), and micro-architecture (TBS) from a single simple, cheap, low-ionizing-radiation device: DXA. The value of this information in a screening program will be evaluated.


MOTIVATION: Comparative analyses of gene expression data from different species have become an important component of the study of molecular evolution. Methods are thus needed to estimate evolutionary distances between expression profiles, as well as a neutral reference to estimate selective pressure. Divergence between expression profiles of homologous genes is often calculated with Pearson's or Euclidean distance, and neutral divergence is usually inferred from randomized data. Despite being widely used, neither of these two steps has been well studied. Here, we analyze these methods formally and on real data, highlight their limitations and propose improvements. RESULTS: It has been demonstrated that Pearson's distance, in contrast to Euclidean distance, leads to underestimation of the expression similarity between homologous genes with a conserved uniform pattern of expression. Here, we first extend this study to genes with a conserved but specific pattern of expression. Surprisingly, we find that both Pearson's and Euclidean distances, used as measures of expression similarity between genes, depend on the expression specificity of those genes. We also show that the Euclidean distance depends strongly on data normalization. Next, we show that the randomization procedure widely used to estimate the rate of neutral evolution is biased when broadly expressed genes are abundant in the data. To overcome this problem, we propose a novel randomization procedure that is unbiased with respect to the expression profiles present in the dataset. Applying our method to mouse and human gene expression data suggests significant gene expression conservation between these species. CONTACT: marc.robinson-rechavi@unil.ch; sven.bergmann@unil.ch SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
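The two divergence measures discussed above, and the naive randomization used as a neutral reference, can be sketched as follows; the expression profiles are invented, and the shuffling shown is the standard (biased) baseline the abstract criticizes, not the authors' improved procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def pearson_distance(a, b):
    """1 - Pearson correlation between two expression profiles."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

def euclidean_distance(a, b):
    return float(np.linalg.norm(a - b))

# Hypothetical profiles of one homologous gene in two species over 6 tissues.
human = np.array([5.0, 5.2, 4.8, 9.5, 5.1, 4.9])   # tissue-specific peak
mouse = np.array([4.9, 5.1, 5.0, 9.1, 5.0, 5.2])

obs = pearson_distance(human, mouse)

# Naive neutral reference: shuffle the tissue labels of one profile.
null = [pearson_distance(human, rng.permutation(mouse)) for _ in range(1000)]
p_naive = np.mean([d <= obs for d in null])
```

Conserved profiles give a small observed distance relative to the shuffled null, but as the abstract notes, the composition of the dataset (broadly vs specifically expressed genes) shifts this null, which motivates the unbiased randomization proposed.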


Conservation biology is commonly associated with the protection of small, endangered populations. Nevertheless, large or potentially expanding populations may also require management to prevent the negative effects of overpopulation. As there are both qualitative and quantitative differences between protecting small populations and controlling large ones, distinct methods and models are needed. The aim of this work was to develop predictive models of large-population dynamics, as well as software tools to estimate the parameters of these models and to test management scenarios. The Alpine ibex (Capra ibex ibex), which has expanded spectacularly since its reintroduction in Switzerland at the beginning of the 20th century, was used as the example species. This task was achieved in three steps. First, a local population dynamics model was developed specifically for the ibex: the underlying age- and sex-structured model is based on a Leslie matrix, extended with density-dependence, environmental stochasticity and regulatory culling. This model was implemented in a management-support software package, named SIM-Ibex, which allows census data maintenance, automated parameter estimation, and the tuning and simulation of culling strategies. However, population dynamics is driven not only by demographic factors but also by dispersal and the colonisation of new areas; habitat suitability and dispersal obstacles therefore had to be modelled as well. To this end, a software package named Biomapper was developed. Its central module is based on the Ecological Niche Factor Analysis (ENFA), whose principle is to compute marginality and specialisation factors of the ecological niche from a set of environmental predictors and species presence data. All Biomapper modules are linked to Geographic Information Systems (GIS); they cover all operations of data importation, predictor preparation, ENFA and habitat suitability map computation, and result validation and processing; a module also allows the mapping of dispersal barriers and corridors. The application domain of ENFA was explored by means of a simulated species distribution. Compared with a commonly used habitat suitability method, the Generalised Linear Model (GLM), ENFA proved better suited to cryptic or spreading species. Demographic and landscape information were finally merged into a global model. To cope with the realism of landscape modelling and the technical constraints of large populations, a cellular automaton approach was chosen: the study area is modelled by a lattice of hexagonal cells, each characterised by a few fixed properties (a carrying capacity and six impermeability rates quantifying exchanges between adjacent cells) and one variable, the population density. The latter varies according to local reproduction, survival and dispersal, under the influence of density-dependence and stochasticity. A software tool named HexaSpace was developed to fulfil two functions: (1) calibrating the automaton on the basis of population dynamics models (e.g. computed by SIM-Ibex) and a habitat suitability map (e.g. computed by Biomapper), and (2) running simulations. It allows the study of the spread of an invading species across a complex landscape made up of areas of varying suitability and dispersal barriers. This model was applied to the history of the ibex reintroduction in the Bernese Alps (Switzerland). SIM-Ibex is now used by governmental wildlife managers to prepare and verify culling plans. Biomapper has been applied to several species (both plants and animals) around the world. Likewise, although HexaSpace was originally designed for terrestrial animal species, it could easily be extended to model plant propagation or the dispersal of flying animals. As these tools were designed to build a complex, realistic model from raw data, and as they benefit from an intuitive user interface, they lend themselves to many applications in conservation biology. Moreover, these approaches can also address theoretical questions in the fields of population and landscape ecology.


The vast alluvial Seeland aquifer system in northwestern Switzerland is subjected to environmental pressures from intensive agriculture, roads, cities and industrial activities. Optimal knowledge of the hydrological resources of this aquifer system is therefore important for their protection. Two representative sites of the Seeland aquifer, Kappelen and Grenchen, were investigated using surface-based geoelectric methods and geophysical borehole logging. By integrating hydrogeological and hydrogeophysical methods, a reliable characterization of the aquifer system at these two sites can be performed in order to better understand the governing flow and transport processes. At the Kappelen site, surface-based geoelectric methods identified the various geoelectric facies present and highlighted their tabular, horizontal structure. The site hosts an unconfined aquifer made up of gravels, up to 15 m thick, with an important sandy fraction, bounded below by a clayey glacial aquitard. Electrical and nuclear logging measurements constrain the petrophysical and hydrological parameters of the saturated gravels. In agreement with mineralogical analyses, the matrix of the probed formations is dominated by quartz and calcite, with densities of 2.65 and 2.71 g/cc, respectively; these two minerals constitute approximately 65 to 75% of the mineral matrix, and matrix density values vary from 2.60 to 2.75 g/cc. Total porosity values obtained from nuclear logs range from 20 to 30% and are consistent with those obtained from electrical logs, which range from 22 to 29%. Together with the inherently low natural gamma radiation and the matrix densities obtained from other nuclear logs, this indicates that the Kappelen aquifer is essentially devoid of clay. Hydraulic conductivity values obtained by the dilution technique vary between 3×10⁻⁴ and 5×10⁻² m/s, while pumping tests give values ranging from 10⁻⁴ to 10⁻² m/s. Grain size analyses of gravel samples collected from borehole cores reveal significant granulometric heterogeneity of these deposits. Calculations based on these granulometric data show that the sand-, silt- and clay-sized fractions constitute between 10 and 40% of the sample mass. The presence and spatial distribution of these fine elements are important as they largely control the distribution of total and effective porosity as well as hydraulic conductivity. Effective porosity values of 11 to 25%, estimated from the grain size analyses, indicate that the zones of higher hydraulic conductivity correspond to the zones dominated by gravels. The methodology established at the Kappelen site was then applied to the Grenchen site. Surface-based geoelectric measurements indicate a confined aquifer made up predominantly of silty sands with intercalated sand lenses, confined by impermeable clayey silts. The Grenchen aquifer has a relatively tabular, horizontal structure with a maximum thickness of 25 m towards the south and southwest, where the sand lenses are most important. Its petrophysical and hydrological characteristics were determined using electrical and nuclear logging. Natural gamma radiation values ranging from 30 to 100 cps indicate the presence of a clay fraction but not of pure clay layers. Total porosity values obtained from electrical logs vary from 25 to 42%, whereas those obtained from nuclear logs vary from 15 to 25%; this overestimation confirms the presence of clays. Bulk density values from nuclear logs of 2.25 to 2.45 g/cc, in conjunction with the total porosities, indicate that the dominant matrix minerals are quartz and calcite, with matrix densities between 2.65 and 2.75 g/cc. Hydraulic conductivity values obtained by the dilution technique vary from 2×10⁻⁶ to 5×10⁻⁴ m/s.
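Deriving total porosity from electrical logs typically relies on Archie's relation between formation resistivity and pore-water resistivity; the sketch below uses generic textbook parameters (a = 1, m = 2) and invented resistivities, not values calibrated for the Seeland sites:

```python
def archie_porosity(Rt, Rw, a=1.0, m=2.0):
    """Total porosity from formation resistivity Rt and pore-water
    resistivity Rw, assuming full water saturation:
    Rt = a * Rw * phi**(-m)  =>  phi = (a * Rw / Rt)**(1/m)."""
    return (a * Rw / Rt) ** (1.0 / m)

# Invented log readings (ohm.m): a clean gravel responds close to Archie;
# clays add surface conduction, lowering Rt and inflating the apparent
# porosity, which is the overestimation noted for the Grenchen logs.
phi = archie_porosity(Rt=120.0, Rw=10.0)   # about 0.29
```

The clay effect is why the electrically derived porosities (25 to 42%) exceed the nuclear-log values (15 to 25%) at Grenchen, while the two agree at the clay-free Kappelen site.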


Gas-liquid mass transfer is an important issue in the design and operation of many chemical unit operations. Despite its importance, the evaluation of gas-liquid mass transfer is not straightforward due to the complex nature of the phenomena involved. In this thesis gas-liquid mass transfer was evaluated in three different gas-liquid reactors in the traditional way, by measuring the volumetric mass transfer coefficient (kLa). The studied reactors were a bubble column with a T-junction two-phase nozzle for gas dispersion, an industrial-scale bubble column reactor for the oxidation of tetrahydroanthrahydroquinone, and a cocurrent downflow structured bed. The main drawback of this approach is that the obtained correlations give only the average volumetric mass transfer coefficient, which depends on average conditions. Moreover, the obtained correlations are valid only for the studied geometry and for the chemical system used in the measurements. In principle, a more fundamental approach is to estimate the interfacial area available for mass transfer from bubble size distributions obtained by solution of population balance equations. This approach has been used in this thesis by developing a population balance model for a bubble column together with phenomenological models for bubble breakage and coalescence. The parameters of the bubble breakage rate and coalescence rate models were estimated by comparing the measured and calculated bubble sizes. The coalescence models always have at least one experimental parameter, because bubble coalescence depends on liquid composition in a way that is difficult to evaluate using known physical properties. The coalescence properties of some model solutions were evaluated by measuring the time that a bubble rests at the free gas-liquid interface before coalescing (the so-called persistence time or rest time). The measured persistence times range from 10 ms up to 15 s depending on the solution. Coalescence was never found to be instantaneous: the bubble oscillates up and down at the interface at least a couple of times before coalescence takes place. The measured persistence times were compared to coalescence times obtained by parameter fitting using measured bubble size distributions in a bubble column and a bubble column population balance model. For short persistence times, the persistence and coalescence times are in good agreement. For longer persistence times, however, the persistence times are at least an order of magnitude longer than the corresponding coalescence times from parameter fitting. This discrepancy may be attributed to uncertainties in the estimation of energy dissipation rates, collision rates and mechanisms, and bubble contact times.
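A common way to obtain kLa in such measurements is the dynamic method, which fits the first-order absorption response of the dissolved gas concentration; the sketch below fits synthetic, noise-free data with invented parameter values, not the thesis's experimental setup:

```python
import numpy as np

# Dynamic method: after switching the gas on, the dissolved concentration
# follows C(t) = C_star * (1 - exp(-kLa * t)). Recover kLa from the data.
kla_true, c_star = 0.02, 8.0          # 1/s and mg/L, illustrative values
t = np.linspace(0.0, 300.0, 50)       # sampling times, s
c = c_star * (1.0 - np.exp(-kla_true * t))

# Linearize: ln(1 - C/C_star) = -kLa * t, then take the least-squares slope.
mask = c < 0.999 * c_star             # avoid log of values near zero
y = np.log(1.0 - c[mask] / c_star)
kla_fit = -np.polyfit(t[mask], y, 1)[0]
```

As the abstract points out, a kLa obtained this way is an average over the whole vessel; resolving the interfacial area locally is what motivates the population balance approach.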


Software engineering is criticized as not being engineering or a 'well-developed' science at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, and whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to only collect metrics afterwards; the values of the relevant metrics have to be predicted, too. These predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes like cost and schedule, and on product attributes like size and quality. Effort estimation can be used for several purposes; in this thesis only effort estimation for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is a new estimation model. It makes use of the basic concepts of Function Point Analysis but avoids the problems and pitfalls found in that method, and it is relatively easy to use and learn. Effort estimation accuracy improved significantly after this model was taken into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement, for which the author has developed a three-level solution. All currently used size metrics are static in nature, but the newly proposed metric is dynamic: it exploits the increasing understanding of the nature of the work as specification and design proceed, and thus 'grows up' along with the software project. Developing an effort estimation model is not possible without gathering and analyzing history data. However, there are many problems with data in software engineering; a major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used properly, that estimates are stored, reported and analyzed properly, and that they are used for project management activities. A higher-level mechanism, called a measurement framework, is also introduced briefly; its purpose is to define and maintain a measurement or estimation process. Without a proper framework, the estimation capability of an organization declines: it requires effort even to maintain an achieved level of estimation accuracy. Estimation results over several successive releases are analyzed, and it is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity; an example is shown to shed more light on the calibration and the model itself, together with remarks about the model's sensitivity. Finally, an example of usage is shown.
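The classical baseline against which such models are compared is a power-law size-to-effort relation calibrated on history data; the sketch below fits that baseline to invented history points and is not the hierarchical model proposed in the thesis:

```python
import math

def fit_power_model(sizes, efforts):
    """Least-squares fit of effort = a * size**b in log-log space."""
    n = len(sizes)
    lx = [math.log(s) for s in sizes]
    ly = [math.log(e) for e in efforts]
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical history data: (size in function points, effort in person-hours).
sizes = [50, 120, 200, 400]
efforts = [400, 1100, 2000, 4600]
a, b = fit_power_model(sizes, efforts)
estimate = a * 300 ** b    # predicted effort for a 300-FP project
```

An exponent b slightly above 1 models the diseconomy of scale usually observed in projects; the calibration step the thesis emphasizes is exactly this fitting of a and b against an organization's own history data.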


The optimization of most pesticide and fertilizer applications is based on overall grove conditions. In this work we propose a measurement system based on a ground laser scanner to estimate the volume of the trees and then extrapolate their foliage surface in real time. Tests with pear trees demonstrated that the relation between the volume and the foliage can be interpreted as linear with a coefficient of correlation (R) of 0.81, and that the foliar surface can be estimated with an average error of less than 5%. Recently, Wei [9, 10] used a terrestrial LIDAR to measure tree height, width and volume, developing a set of experiments to evaluate the repeatability and accuracy of the measurements and obtaining a coefficient of variation of 5.4% and a relative error of 4.4% in the estimation of the volume, but without real-time capabilities.
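The reported linear volume-foliage relation amounts to a simple least-squares fit; the data points below are invented for illustration (the abstract reports only the correlation R = 0.81 and the error bound):

```python
import numpy as np

# Hypothetical tree-volume vs foliage-area pairs; these numbers are made up.
volume = np.array([1.2, 1.8, 2.5, 3.1, 3.9, 4.6])      # canopy volume, m^3
foliage = np.array([4.0, 6.5, 8.1, 11.0, 12.4, 15.9])  # foliage surface, m^2

slope, intercept = np.polyfit(volume, foliage, 1)      # linear model
r = np.corrcoef(volume, foliage)[0, 1]                 # correlation coefficient
predicted = slope * 2.0 + intercept   # foliage estimate for a 2 m^3 canopy
```

Once slope and intercept are calibrated, the real-time system only needs the scanned volume to output a foliage estimate, which is what makes per-tree dosing feasible.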


Background: During the late 1990s the chance of surviving breast cancer increased. Changes in survival functions reflect a mixture of effects: both the introduction of adjuvant treatments and early screening with mammography played a role in the decline in mortality. Evaluating the contribution of these interventions using mathematical models requires survival functions before and after their introduction. Furthermore, the required survival functions may differ by age group and are related to disease stage at diagnosis. Sometimes detailed information is not available, as was the case for the region of Catalonia (Spain); one may then derive the functions using information from other geographical areas. This work presents the methodology used to estimate age- and stage-specific Catalan breast cancer survival functions from scarce Catalan survival data by adapting the age- and stage-specific US functions. Methods: Cubic splines were used to smooth the data and obtain continuous hazard rate functions. We then fitted a Poisson model, with time as a covariate, to derive hazard ratios. The hazard ratios were applied to US survival functions detailed by age and stage to obtain the Catalan estimates. Results: We first estimated the hazard ratios for Catalonia versus the USA before and after the introduction of screening. The hazard ratios were then multiplied by the age- and stage-specific breast cancer hazard rates from the USA to obtain the Catalan hazard rates. We also compared breast cancer survival in Catalonia and the USA in two time periods: before cancer control interventions (USA 1975–79, Catalonia 1980–89) and after (USA and Catalonia 1990–2001). Survival in Catalonia in the 1980–89 period was worse than in the USA during 1975–79, but the differences disappeared in 1990–2001. Conclusion: Our results suggest that access to better treatments and quality of care contributed to large improvements in survival in Catalonia. Furthermore, we obtained detailed breast cancer survival functions that will be used to model the effect of screening and adjuvant treatments in Catalonia.
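The transfer step described above is a proportional-hazards calculation: multiply the reference hazard by the estimated hazard ratio, then rebuild the survival curve from the cumulative hazard. The hazard values and ratio below are invented for illustration:

```python
import numpy as np

# Reference (US) annual hazard over 10 years since diagnosis, and an
# illustrative Catalonia-vs-US hazard ratio from the Poisson model.
h_us = np.full(10, 0.03)    # constant annual hazard, invented
hr = 1.4                    # hypothetical hazard ratio

h_cat = hr * h_us                      # transferred (Catalan) hazard
s_us = np.exp(-np.cumsum(h_us))        # survival from cumulative hazard
s_cat = np.exp(-np.cumsum(h_cat))      # index i = end of year i+1
```

With HR > 1 the transferred survival curve lies uniformly below the reference one, matching the pre-screening comparison in the abstract; an HR near 1 reproduces the later period in which the survival gap disappeared.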


BACKGROUND: Estimation of glomerular filtration rate (eGFR) using a common formula for both adult and pediatric populations is challenging. Using inulin clearances (iGFRs), this study aims to investigate whether there is a precise age cutoff beyond which the Modification of Diet in Renal Disease (MDRD), Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), or Cockcroft-Gault (CG) formulas can be applied with acceptable precision. The performance of the new Schwartz formula according to age is also evaluated. METHOD: We compared 503 iGFRs for 503 children aged between 33 months and 18 years to eGFRs. To define the most precise age cutoff value for each formula, a circular binary segmentation method analyzing the formulas' bias values according to the children's ages was performed, with bias defined as the difference between iGFR and eGFR. To validate the identified cutoffs, the 30% accuracy (the proportion of eGFRs within 30% of the iGFR) was calculated. RESULTS: For MDRD, CKD-EPI and CG, the best age cutoffs were ≥14.3, ≥14.2 and ≤10.8 years, respectively. The lowest mean bias and highest accuracy were -17.11 and 64.7% for MDRD, 27.4 and 51% for CKD-EPI, and 8.31 and 77.2% for CG. The Schwartz formula performed best below the age of 10.9 years. CONCLUSION: For the MDRD and CKD-EPI formulas, the mean bias values decreased with increasing child age, and these formulas were more accurate beyond age cutoffs of 14.3 and 14.2 years, respectively. For the CG and Schwartz formulas, the lowest mean bias values and best accuracies were below age cutoffs of 10.8 and 10.9 years, respectively. Nevertheless, the accuracies of the formulas remained below the National Kidney Foundation Kidney Disease Outcomes Quality Initiative target in these age groups, and therefore none of these formulas can be recommended to estimate GFR in children and adolescents.
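For reference, the updated ("bedside") Schwartz formula evaluated here has a simple closed form: eGFR (mL/min/1.73 m²) = k × height / serum creatinine, with k = 0.413, height in cm and creatinine in mg/dL; the patient values below are invented:

```python
def schwartz_egfr(height_cm, scr_mg_dl, k=0.413):
    """Updated (2009) bedside Schwartz estimate of GFR in mL/min/1.73 m^2.

    height_cm  -- patient height in centimetres
    scr_mg_dl  -- serum creatinine in mg/dL
    k          -- published coefficient of the bedside formula
    """
    return k * height_cm / scr_mg_dl

# Hypothetical child: 140 cm tall, serum creatinine 0.6 mg/dL.
egfr = schwartz_egfr(height_cm=140.0, scr_mg_dl=0.6)   # about 96
```

Comparing such an estimate against a gold-standard inulin clearance for the same patient yields exactly the bias (iGFR minus eGFR) that the segmentation analysis in the study operates on.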

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Solid tumor growth triggers a wound healing response. Similar to wound healing, fibroblasts in the tumor stroma differentiate into myofibroblasts (also referred to as cancer-associated fibroblasts) primarily, but not exclusively, in response to transforming growth factor-β (TGF-β). Myofibroblasts in turn enhance tumor progression by remodeling the stroma. Among proteases implicated in stroma remodeling, matrix metalloproteinases (MMPs), including MMP-9, play a prominent role. Recent evidence indicates that MMP-9 recruitment to the tumor cell surface enhances tumor growth and invasion. In the present work, we addressed the potential relevance of MMP-9 recruitment to and activity at the surface of fibroblasts. We show that recruitment of MMP-9 to the fibroblast cell surface occurs through its fibronectin-like (FN) domain and that the molecule responsible for the recruitment is lysyl hydroxylase 3 (LH3). Functional assays suggest that both pro- and active MMP-9 trigger α-smooth muscle actin expression in cultured fibroblasts, reflecting myofibroblast differentiation, possibly as a result of TGF-β activation. Moreover, the recombinant FN domain inhibited both MMP-9-induced TGF-β activation and α-smooth muscle actin expression by displacing MMP-9 from the fibroblast cell surface. Together our results uncover LH3 as a new docking receptor of MMP-9 on the fibroblast cell surface and demonstrate that the MMP-9 FN domain is essential for the interaction. They also show that the recombinant FN domain inhibits MMP-9-induced TGF-β activation and fibroblast differentiation, providing a potentially attractive therapeutic reagent toward attenuating tumor progression where MMP-9 activity is strongly implicated.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

AIMS: To estimate the effect of a nursing intervention in home-dwelling older adults on the occurrence and course of delirium and concomitant cognitive and functional impairment. METHODS: A randomized clinical pilot trial using a before/after design was conducted with older patients discharged from hospital with a medical prescription for home care. A total of 51 patients were randomized into the experimental group (EG) and 52 into the control group (CG). In addition to usual home care, nursing interventions were delivered to the EG by a geriatric nurse specialist at 48 h, 72 h, 7 days, 14 days and 21 days after discharge. All patients were monitored for symptoms of delirium using the Confusion Assessment Method. Cognitive and functional status were measured with the Mini-Mental State Examination and the Katz and Lawton Index. RESULTS: No statistical differences with regard to symptoms of delirium (p = 0.085), cognitive impairment (p = 0.151) or functional status (p = 0.235) were found between the EG and CG at study entry or at 1 month. After adjustment, statistical differences in favor of the EG were found for symptoms of delirium (p = 0.046), cognitive impairment (p = 0.015) and functional status (p = 0.033). CONCLUSION: Nursing interventions to detect delirium at home are feasible and accepted, and they showed a promising effect in improving delirium.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The phyllochron is defined as the time required for the appearance of successive leaves on a plant; it characterises plant growth, development and adaptation to the environment. To assess the growth and adaptation of strawberry cultivars grown intercropped with fig trees, the phyllochron was estimated in this production system and in the monocrop. The experiment was conducted in greenhouses at the University of Passo Fundo (28º15'41'' S, 52º24'45'' W, 709 m) from June 8th to September 4th, 2009, covering the period from transplanting until the 2nd flowering. The cultivars Aromas, Camino Real, Albion, Camarosa and Ventana, whose seedlings came from the Agrícola LLahuen Nursery in Chile, as well as Festival, Camino Real and Earlibrite, from the Viansa S.A. Nursery in Argentina, were grown in white polyethylene bags filled with commercial substrate (Tecnomax®) and evaluated. The treatments were arranged in a randomised block design with four replicates. A linear regression was fitted between the leaf number (LN) on the main crown and the accumulated thermal time (ATT). The phyllochron (degree-days leaf-1) was estimated as the inverse of the slope (angular coefficient) of the linear regression. The data were submitted to ANOVA and, where significant, means were compared using the Tukey test (p < 0.05). The phyllochron (mean ± standard deviation) of strawberry cultivars intercropped with fig trees ranged from 149.35 ± 31.29 ºC day leaf-1 in cv. Albion to 86.34 ± 34.74 ºC day leaf-1 in cv. Ventana. Significant differences were observed among cultivars produced in the soilless system, with the highest value for cv. Albion (199.96 ± 29.7 ºC day leaf-1), which required more degree-days to produce a leaf, and the lowest for cv. Ventana (85.76 ± 11.51 ºC day leaf-1). Based on these results, Albion requires more degree-days to issue a leaf than cv. Ventana.
It was concluded that strawberry cultivars can be grown intercropped with fig trees (cv. Roxo de Valinhos).
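The phyllochron estimation described above (inverse of the slope of the LN-versus-ATT regression) can be sketched in a few lines. The observations below are hypothetical, for illustration only; they are not data from this study:

```python
import numpy as np

# Hypothetical observations for one cultivar:
# accumulated thermal time (ATT, ºC day) and leaf number (LN) on the main crown.
att = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
ln = np.array([1.0, 2.2, 3.1, 4.3, 5.0, 6.1])

# Fit LN = slope * ATT + intercept; the phyllochron is the inverse of the slope.
slope, intercept = np.polyfit(att, ln, 1)
phyllochron = 1.0 / slope  # ºC day per leaf

print(f"slope = {slope:.5f} leaves per ºC day")
print(f"phyllochron = {phyllochron:.1f} ºC day leaf^-1")
```

With these made-up data the fitted slope is about 0.01 leaves per ºC day, giving a phyllochron of roughly 100 ºC day per leaf, i.e. each new leaf appears after about 100 accumulated degree-days.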

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Resveratrol has been shown to have beneficial effects on diseases related to oxidant and/or inflammatory processes and extends the lifespan of a range of organisms, from simple organisms to rodents. The objective of the present study was to estimate the dietary intake of resveratrol and piceid (R&P) present in foods, and to identify the principal dietary sources of these compounds in the Spanish adult population. For this purpose, a food composition database (FCDB) of R&P in Spanish foods was compiled. The study included 40 685 subjects aged 35-64 years from northern and southern regions of Spain who were included in the European Prospective Investigation into Cancer and Nutrition (EPIC)-Spain cohort. Usual food intake was assessed by personal interviews using a computerised version of a validated diet history method. An FCDB with 160 items was compiled. The estimated median and mean R&P intakes were 100 and 933 μg/d, respectively. Approximately 32% of the population did not consume R&P. The most abundant of the four stilbenes studied was trans-piceid (53.6%), followed by trans-resveratrol (20.9%), cis-piceid (19.3%) and cis-resveratrol (6.2%). The most important sources of R&P were wines (98.4%) and grapes and grape juices (1.6%), whereas peanuts, pistachios and berries contributed less than 0.01%. For this reason the pattern of R&P intake was similar to the pattern of wine intake. This is the first time that R&P intake has been estimated in a Mediterranean country.
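The intake-estimation step described above reduces to summing, over all foods, the usual amount consumed multiplied by the food's R&P concentration from the FCDB. A minimal sketch with entirely hypothetical concentrations and consumption figures (not values from the EPIC-Spain FCDB):

```python
# Hypothetical FCDB entries: total R&P content in mg per 100 g (or 100 mL) of food.
fcdb_mg_per_100g = {
    "red wine": 2.5,
    "grape juice": 0.5,
    "peanuts": 0.03,
}

# Hypothetical usual daily consumption (g or mL per day) from a diet history.
consumption_g_per_day = {
    "red wine": 120,
    "grape juice": 50,
    "peanuts": 10,
}

# Usual intake = sum over foods of (amount consumed x concentration).
intake_mg_per_day = sum(
    consumption_g_per_day[food] * fcdb_mg_per_100g[food] / 100
    for food in consumption_g_per_day
)
print(f"estimated R&P intake: {intake_mg_per_day:.3f} mg/d")
```

Because wine dominates both the concentrations and the consumed amounts in such a calculation, the resulting intake distribution closely tracks the wine-consumption pattern, as the abstract notes.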