890 results for Proxy.


Relevance:

10.00%

Publisher:

Abstract:

Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
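The error-model idea described above can be illustrated with a deliberately simplified sketch: instead of FPCA on full response curves, a scalar proxy response is mapped to the exact response with a least-squares fit learned on a small learning set. All data and the linear form here are hypothetical, not the authors' implementation.

```python
# Minimal sketch of a proxy-to-exact error model (illustrative only).
# Hypothetical data: every realization yields a cheap proxy response;
# for the learning set the exact response is also computed.

def fit_error_model(proxy, exact):
    """Fit exact ~ a + b * proxy by ordinary least squares."""
    n = len(proxy)
    mp = sum(proxy) / n
    me = sum(exact) / n
    cov = sum((p - mp) * (e - me) for p, e in zip(proxy, exact))
    var = sum((p - mp) ** 2 for p in proxy)
    b = cov / var
    a = me - b * mp
    return lambda p: a + b * p

# Learning set: the proxy systematically under-predicts the exact response.
learn_proxy = [1.0, 2.0, 3.0, 4.0]
learn_exact = [2.1, 4.0, 6.1, 8.0]

predict = fit_error_model(learn_proxy, learn_exact)
# Predict the exact response of a new realization from its proxy alone.
print(predict(5.0))  # close to 10
```

In the paper's setting the responses are whole curves, so the regression is carried out on a small number of FPCA scores rather than on a scalar, but the learning-set logic is the same.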

Relevance:

10.00%

Publisher:

Abstract:

We examined the effect of anterior ischemic optic neuropathy (AION) on the activity of intrinsically photosensitive retinal ganglion cells (ipRGCs) using the pupil as proxy. Eighteen patients with AION (10 unilateral, 8 bilateral) and 29 age-matched control subjects underwent chromatic pupillometry. Red and blue light stimuli increasing in 0.5 log steps were presented to each eye independently under conditions of dark and light adaptation. The recorded pupil contraction was plotted against stimulus intensity to generate scotopic and photopic response curves for assessment of synaptically-mediated ipRGC activity. Bright blue light stimuli presented monocularly and binocularly were used for melanopsin activation. The post-stimulus pupil size (PSPS) at the 6th second following stimulus offset was the marker of intrinsic ipRGC activity. Finally, questionnaires were administered to assess the influence of ipRGCs on sleep. The pupil response and PSPS to all monocularly-presented light stimuli were impaired in AION eyes, indicating ipRGC dysfunction. To binocular light stimulation, the PSPS of AION patients was similar to that of controls. There was no difference in the sleep habits of the two groups. Thus after ischemic injury to one or both optic nerves, the summated intrinsic ipRGC activity is preserved when both eyes receive adequate light exposure.

Relevance:

10.00%

Publisher:

Abstract:

Using a database of 2,263 responses to public R&D calls in Catalonia during the period 2007–2010, this paper analyses how the territorial and policy dimensions interact with the propensity to apply for, and be awarded, a public R&D subsidy. Controlling for characteristics at the firm and project level, we estimate models using a two-step procedure. In the first step, our results suggest that large firms which export and which belong to high-tech manufacturing are more likely to participate in a public R&D call. Furthermore, both urban location and past experience of such calls have a positive effect. Our territorial proxy for information spillovers shows a positive sign, but it is only significant at the intra-industry level. Membership of one of the sectors prioritized by the Catalan government, perhaps surprisingly, does not have a significant impact. In the second step, our results show that cooperative projects, SMEs, and older firms have a higher probability of obtaining a public subsidy. Finally, cluster policy does not show a clear relationship with the public R&D call, suggesting that cluster policies and R&D subsidies pursue different goals. Our results are in line with previous findings in the literature, but they highlight the unequal territorial distribution of the firms which apply and suggest that policymakers should interlink the decision criteria for their public calls with other policies.
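The two-step structure (first the decision to apply, then the award conditional on applying) can be sketched with hypothetical firm records; this toy computation only illustrates the conditioning, not the econometric estimator used in the paper.

```python
# Hypothetical firm records: (applied, awarded, is_sme).
# Awards are only observed for firms that applied, so the second step
# must condition on application - the core of any two-step procedure.
firms = [
    (True,  True,  True), (True,  False, False),
    (True,  True,  True), (False, None,  True),
    (True,  False, False), (False, None,  False),
]

# Step 1: propensity to apply for the public R&D call.
p_apply = sum(a for a, _, _ in firms) / len(firms)

# Step 2: award probability, conditional on having applied.
applicants = [f for f in firms if f[0]]
p_award_sme = (sum(1 for _, aw, sme in applicants if sme and aw)
               / sum(1 for _, _, sme in applicants if sme))
p_award_other = (sum(1 for _, aw, sme in applicants if not sme and aw)
                 / sum(1 for _, _, sme in applicants if not sme))

print(p_apply, p_award_sme, p_award_other)
```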

Relevance:

10.00%

Publisher:

Abstract:

Economic impacts from floods have been increasing over recent decades, a fact often attributed to a changing climate. On the other hand, there is now a significant body of scientific scholarship pointing towards increasing concentrations and values of assets as the principal cause of the increasing cost of natural disasters. This holds true for a variety of perils and across different jurisdictions. With this in mind, this paper examines the time history of insured losses from floods in Spain between 1971 and 2008. It assesses whether any discernible residual signal remains after adjusting the data for the increase in the number and value of insured assets over this period of time. Data on insured losses from floods were sourced from the Consorcio de Compensación de Seguros (CCS). Although a public institution, CCS compensates homeowners for the damage produced by floods, and thus plays a role similar to that of a private insurance company. Insured losses were adjusted using two proxy measures: first, changes in the total amount of annual surcharges (premiums) paid by customers to CCS, and second, changes in the total value of dwellings per year. The adjusted data reveal no significant trend over the period 1971-2008 and serve again to confirm that, at this juncture, societal influences remain the prime factors driving insured and economic losses from natural disasters.
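The adjustment logic can be sketched as follows: dividing each year's nominal losses by an exposure proxy (here, hypothetical premium totals) removes the growth in insured assets, so any residual trend in the ratio is the signal of interest.

```python
# Hypothetical yearly data: insured flood losses and total premiums
# (the exposure proxy). Values are invented for illustration.
years    = [1971, 1980, 1990, 2000, 2008]
losses   = [10.0, 18.0, 30.0, 52.0, 80.0]    # nominal losses
premiums = [ 5.0,  9.0, 15.0, 26.0, 40.0]    # exposure proxy

# Express each year's loss per unit of exposure: the nominal series
# grows eightfold, but the adjusted series is flat.
loss_ratio = [l / p for l, p in zip(losses, premiums)]
print(loss_ratio)
```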

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: In contrast to obesity, information on the health risks of underweight is sparse. We examined the long-term association between underweight and mortality, considering factors that may influence this relationship. METHODS: We included 31,578 individuals aged 25-74 years, who participated in population-based health studies between 1977 and 1993 and were followed up for survival until 2008 by record linkage with the Swiss National Cohort (SNC). Body Mass Index (BMI) was calculated from measured (53% of the study population) or self-reported height and weight. Underweight was defined as BMI < 18.5 kg/m². Cox regression models were used to determine mortality Hazard Ratios (HR) of underweight vs. normal weight (BMI 18.5 to < 25.0 kg/m²). Covariates were study, sex, smoking, a healthy eating proxy, sports frequency, and educational level. RESULTS: Underweight individuals represented 3.0% of the total study population (n = 945) and were mostly women (89.9%). Compared to normal weight, underweight was associated with increased all-cause mortality (HR: 1.37; 95% CI: 1.14-1.65). Increased risk was apparent in both sexes, regardless of smoking status, and was mainly driven by excess death from external causes (HR: 3.18; 1.96-5.17), but not from cancer, cardiovascular or respiratory diseases. The HR was 1.16 (0.88-1.53) in studies with measured BMI and 1.59 (1.24-2.05) with self-reported BMI. CONCLUSIONS: The increased mortality risk of underweight people was mainly due to an increased risk of death from external causes. Using self-reported BMI may lead to an overestimation of the mortality risk associated with underweight.
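The reported hazard ratio can be given intuition with a crude rate-ratio calculation on hypothetical person-year counts; this is a simplification, since the study used Cox regression with covariate adjustment.

```python
# Hypothetical person-year counts illustrating a rate ratio, the crude
# analogue of the hazard ratio for underweight vs normal weight.
deaths_under, py_under   = 60,  20000.0   # underweight group
deaths_normal, py_normal = 900, 410000.0  # normal-weight group

# Mortality rate in each group, then their ratio.
rate_ratio = (deaths_under / py_under) / (deaths_normal / py_normal)
print(round(rate_ratio, 2))  # 1.37 with these invented counts
```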

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVES: To investigate, using a Mendelian randomisation approach, whether heavier smoking is associated with a range of regional adiposity phenotypes, in particular those related to abdominal adiposity. DESIGN: Mendelian randomisation meta-analyses, using a genetic variant (rs16969968/rs1051730 in the CHRNA5-CHRNA3-CHRNB4 gene region) as a proxy for smoking heaviness, of the associations of smoking heaviness with a range of adiposity phenotypes. PARTICIPANTS: 148 731 current, former and never-smokers of European ancestry aged ≥16 years from 29 studies in the Consortium for Causal Analysis Research in Tobacco and Alcohol (CARTA). PRIMARY OUTCOME MEASURES: Waist and hip circumferences, and waist-hip ratio. RESULTS: The data included up to 66 809 never-smokers, 43 009 former smokers and 38 913 current daily cigarette smokers. Among current smokers, for each extra minor allele, the geometric mean was lower for waist circumference by -0.40% (95% CI -0.57% to -0.22%), with effects on hip circumference, waist-hip ratio and body mass index (BMI) being -0.31% (95% CI -0.42% to -0.19%), -0.08% (-0.19% to 0.03%) and -0.74% (-0.96% to -0.51%), respectively. In contrast, among never-smokers, these effects were higher by 0.23% (0.09% to 0.36%), 0.17% (0.08% to 0.26%), 0.07% (-0.01% to 0.15%) and 0.35% (0.18% to 0.52%), respectively. When adjusting the three central adiposity measures for BMI, the effects among current smokers changed direction and were higher by 0.14% (0.05% to 0.22%) for waist circumference, 0.02% (-0.05% to 0.08%) for hip circumference and 0.10% (0.02% to 0.19%) for waist-hip ratio, for each extra minor allele. CONCLUSIONS: For a given BMI, a gene variant associated with increased cigarette consumption was associated with increased waist circumference. Smoking in an effort to control weight may lead to accumulation of central adiposity.
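The per-allele percentage effects reported above come from modelling the geometric mean; a minimal sketch with hypothetical waist-circumference means shows how a slope on the log scale converts to a percentage change per extra minor allele.

```python
import math

# Per-allele effects on a geometric mean: regress the log of the trait
# on minor-allele count (0, 1, 2); a slope b corresponds to a change of
# (exp(b) - 1) * 100 percent per extra allele. Means are hypothetical.
alleles = [0, 1, 2]
waist_geo_mean = [95.0, 94.62, 94.24]  # geometric means, cm

logs = [math.log(w) for w in waist_geo_mean]
b = (logs[2] - logs[0]) / 2            # slope per extra allele
pct_per_allele = (math.exp(b) - 1) * 100
print(round(pct_per_allele, 2))        # about -0.40 with these numbers
```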

Relevance:

10.00%

Publisher:

Abstract:

There is an ongoing debate about the determinants of CAP reform. The economic environment has not been treated as a direct determinant of CAP reform, but its proxy, the budget, has not only been regarded as such but has been singled out as a key cause of CAP reform. This paper argues, however, that the budget does not affect the modus operandi of the CAP. It affects the quantity of support each farmer is going to get, and sometimes even the timing of the reform, but not the form in which that support is delivered. Other CAP determinants, and international negotiations in particular, have an impact on the substance of CAP reform. This hypothesis is not contradicted by an analysis of the CAP 2013 changes.

Relevance:

10.00%

Publisher:

Abstract:

This work presents a new way of providing location-dependent information to the users of wireless networks. The information is delivered to each user without knowing anything about the user's identity. HTTP was chosen as the application-level protocol, which allows the system to deliver information to most users, who may use very different terminal devices. The system operates as an extension of an intercepting web proxy server. Based on the contents of various databases, the system decides whether or not to deliver information. The system also includes simple software for locating users with the accuracy of a single access point. Although the presented solution aims at providing location-based advertisements, it can easily be adapted to deliver any type of information to the users.
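The proxy's decision step might look like the following sketch; the data, names, and injection mechanism are hypothetical stand-ins for the location databases the real system consulted from the intercepting proxy.

```python
# Sketch of the decision logic in an intercepting-proxy extension:
# the user's location is known only at access-point granularity, and
# content is selected from a (hypothetical) table keyed by location.
ads_by_access_point = {
    "ap-cafeteria": "Lunch offer at the cafeteria",
    "ap-library":   "Extended library hours this week",
}

def inject_content(access_point, html):
    """Decide whether to deliver location-based content; identity unused."""
    ad = ads_by_access_point.get(access_point)
    if ad is None:
        return html                      # no content for this location
    return html.replace("</body>", f"<p>{ad}</p></body>")

page = "<html><body>Hello</body></html>"
print(inject_content("ap-library", page))
```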

Relevance:

10.00%

Publisher:

Abstract:

Weight gain is a major health problem among psychiatric populations. It implicates several receptors and hormones involved in energy balance and metabolism. Phosphoenolpyruvate carboxykinase 1 is a rate-controlling enzyme involved in gluconeogenesis, glyceroneogenesis and cataplerosis and has been related to obesity and diabetes phenotypes in animals and humans. The aim of this study was to investigate the association of phosphoenolpyruvate carboxykinase 1 polymorphisms with metabolic traits in psychiatric patients treated with psychotropic drugs inducing weight gain and in general population samples. One polymorphism (rs11552145G > A) significantly associated with body mass index in the psychiatric discovery sample (n = 478) was replicated in 2 other psychiatric samples (n1 = 168, n2 = 188), with AA-genotype carriers having a lower body mass index than G-allele carriers. Stronger associations were found among women younger than 45 years carrying the AA-genotype as compared to G-allele carriers (-2.25 kg/m², n = 151, P = 0.009) and in the discovery sample (-2.20 kg/m², n = 423, P = 0.0004). In the discovery sample, for which metabolic parameters were available, the AA-genotype showed lower waist circumference (-6.86 cm, P = 0.008) and triglyceride levels (-5.58 mg/100 mL, P < 0.002) when compared to G-allele carriers. Finally, waist-to-hip ratio was associated with rs6070157 (a proxy of rs11552145, r = 0.99) in a population-based sample (N = 123,865, P = 0.022). Our results suggest an association of the rs11552145G > A polymorphism with metabolic-related traits, especially in psychiatric populations and in women younger than 45 years.


Relevance:

10.00%

Publisher:

Abstract:

Past temperature variations are usually inferred from proxy data or estimated using general circulation models. Comparisons between climate estimations derived from proxy records and from model simulations help to better understand mechanisms driving climate variations, and also offer the possibility to identify deficiencies in both approaches. This paper presents regional temperature reconstructions based on tree-ring maximum density series in the Pyrenees, and compares them with the output of global simulations for this region and with regional climate model simulations conducted for the target region. An ensemble of 24 reconstructions of May-to-September regional mean temperature was derived from 22 maximum density tree-ring site chronologies distributed over the larger Pyrenees area. Four different tree-ring series standardization procedures were applied, combining two detrending methods: 300-yr spline and the regional curve standardization (RCS). Additionally, different methodological variants for the regional chronology were generated by using three different aggregation methods. Calibration verification trials were performed in split periods and using two methods: regression and a simple variance matching. The resulting set of temperature reconstructions was compared with climate simulations performed with global (ECHO-G) and regional (MM5) climate models. The 24 variants of May-to-September temperature reconstructions reveal a generally coherent pattern of inter-annual to multi-centennial temperature variations in the Pyrenees region for the last 750 yr. However, some reconstructions display a marked positive trend for the entire length of the reconstruction, pointing out that the application of the RCS method to a suboptimal set of samples may lead to unreliable results. 
Climate model simulations agree with the tree-ring based reconstructions at multi-decadal time scales, suggesting solar variability and volcanism as the main factors controlling preindustrial mean temperature variations in the Pyrenees. Nevertheless, the comparison also highlights differences with the reconstructions, mainly in the amplitude of past temperature variations and in the 20th century trends. Neither proxy-based reconstructions nor model simulations are able to perfectly track the temperature variations of the instrumental record, suggesting that both approximations still need further improvements.
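The simple variance-matching calibration mentioned above can be sketched with hypothetical data: the proxy series is linearly rescaled so that its mean and standard deviation over the calibration period equal those of the instrumental series.

```python
import math

# Variance-matching calibration of a proxy to instrumental data.
# All values are hypothetical and only illustrate the rescaling.
proxy = [0.2, 0.5, 0.9, 0.4, 0.7]          # tree-ring density indices
temp  = [13.1, 13.9, 15.2, 13.6, 14.5]     # instrumental May-Sep means, degC

def moments(x):
    """Mean and (population) standard deviation of a series."""
    m = sum(x) / len(x)
    s = math.sqrt(sum((v - m) ** 2 for v in x) / len(x))
    return m, s

mp, sp = moments(proxy)
mt, st = moments(temp)
# Shift and scale the proxy so its moments match the instrumental ones.
reconstruction = [mt + (v - mp) * st / sp for v in proxy]

m_rec, s_rec = moments(reconstruction)
print(round(m_rec, 2), round(s_rec, 3))
```

Unlike regression-based calibration, this approach preserves the full variance of the target series, at the cost of not minimizing the calibration-period error.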

Relevance:

10.00%

Publisher:

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems, ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. 
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. 
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC results. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
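The two-stage MCMC idea can be sketched with toy one-dimensional densities standing in for the exact and proxy flow models: proposals are screened cheaply first, and the expensive evaluation runs only for the survivors, with a second-stage ratio that corrects for the screening.

```python
import math
import random

# Two-stage Metropolis sketch (Christen & Fox style). The densities are
# hypothetical stand-ins: log_exact plays the expensive flow model,
# log_proxy a cheap, slightly biased approximation of it.
random.seed(0)

def log_exact(x):                 # "expensive" target log-density
    return -0.5 * x * x

def log_proxy(x):                 # cheap approximation with a small bias
    return -0.5 * (x - 0.1) ** 2

x = 0.0
exact_calls = 0
samples = []
for _ in range(2000):
    y = x + random.gauss(0.0, 1.0)
    # Stage 1: screen the proposal with the proxy; reject cheaply if it fails.
    if math.log(random.random()) >= log_proxy(y) - log_proxy(x):
        samples.append(x)
        continue
    # Stage 2: evaluate the exact model only for survivors, with a ratio
    # that corrects for the first-stage screening.
    exact_calls += 1
    if math.log(random.random()) < (log_exact(y) - log_exact(x)
                                    - log_proxy(y) + log_proxy(x)):
        x = y
    samples.append(x)

print(exact_calls, round(sum(samples) / len(samples), 2))
```

The point of the construction is visible in `exact_calls`: a sizeable fraction of proposals never reaches the expensive model, while the chain still targets the exact density.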

Relevance:

10.00%

Publisher:

Abstract:

The oxidative potential (OP) of particulate matter has been proposed as a toxicologically relevant metric. This concept is already frequently used for hazard characterization of ambient particles but it is still seldom applied in the occupational field. The objective of this study was to assess the OP in two different types of workplaces and to investigate the relationship between the OP and the physicochemical characteristics of the collected particles. At a toll station, at the entrance of a tunnel ('Tunnel' site), and at three different mechanical yards ('Depot' sites), we assessed particle mass (PM4 and PM2.5 and size distribution), number and surface area, organic and elemental carbon, polycyclic aromatic hydrocarbon (PAH), and four quinones as well as iron and copper concentration. The OP was determined directly on filters without extraction by using the dithiothreitol assay (DTT assay-OP(DTT)). The averaged mass concentration of respirable particles (PM4) at the Tunnel site was about twice the one at the Depot sites (173±103 and 90±36 µg m(-3), respectively), whereas the OP(DTT) was practically identical for all the sites (10.6±7.2 pmol DTT min(-1) μg(-1) at the Tunnel site; 10.4±4.6 pmol DTT min(-1) μg(-1) at the Depot sites). The OP(DTT) of PM4 was mostly present on the smallest PM2.5 fraction (OP(DTT) PM2.5: 10.2±8.1 pmol DTT min(-1) μg(-1); OP(DTT) PM4: 10.5±5.8 pmol DTT min(-1) μg(-1) for all sites), suggesting the presence of redox inactive components in the PM2.5-4 fraction. Although the reactivity was similar at the Tunnel and Depot sites irrespective of the metric chosen (OP(DTT) µg(-1) or OP(DTT) m(-3)), the chemicals associated with OP(DTT) were different between the two types of workplaces. The organic carbon, quinones, and/or metal content (Fe, Cu) were strongly associated with the DTT reactivity at the Tunnel site whereas only Fe and PAH were associated (positively and negatively, respectively) with this reactivity at the Depot sites. 
These results demonstrate the feasibility of measuring the OP(DTT) in occupational environments and suggest that the particulate OP(DTT) integrates different physicochemical properties. This parameter could be a potentially useful exposure proxy for investigating particle exposure-related oxidative stress and its consequences. Further research is needed, mostly to demonstrate the association of OP(DTT) with relevant oxidative endpoints in humans exposed to particles.

Relevance:

10.00%

Publisher:

Abstract:

Stable isotope abundances of carbon (δ13C) and nitrogen (δ15N) in the bone of 13 species of marine mammals from the northwest coast of Africa were investigated to assess their positions in the local trophic web and their preferred habitats. Also, samples of primary producers and potential prey species from the study area were collected to characterise the local isotopic landscape. This characterisation indicated that δ13C values increased from offshore to nearshore and that δ15N was a good proxy for trophic level. Therefore, the most coastal species were Monachus monachus and Sousa teuszii, whereas the most pelagic were Physeter macrocephalus and Balaenoptera acutorostrata. δ15N values indicated that marine mammals located at the lowest trophic level were B. acutorostrata, Stenella coeruleoalba and Delphinus sp., and those occupying the highest trophic level were M. monachus and P. macrocephalus. The trophic level of Orcinus orca was similar to that of M. monachus, suggesting that O. orca preys on fish. Conservation of coastal and threatened species (M. monachus and S. teuszii) off NW Africa should be a priority because these species, as the main apex predators, cannot be replaced by other marine mammals.

Relevance:

10.00%

Publisher:

Abstract:

The analysis of the activity of neuronal cultures is considered to be a good proxy of the functional connectivity of in vivo neuronal tissues. Thus, the functional complex network inferred from activity patterns is a promising way to unravel the interplay between structure and functionality of neuronal systems. Here, we monitor the spontaneous self-sustained dynamics in neuronal cultures formed by interconnected aggregates of neurons (clusters). Dynamics is characterized by the fast activation of groups of clusters in sequences termed bursts. The analysis of the time delays between clusters' activations within the bursts allows the reconstruction of the directed functional connectivity of the network. We propose a method to statistically infer this connectivity and analyze the resulting properties of the associated complex networks. Surprisingly enough, in contrast to what has been reported for many biological networks, the clustered neuronal cultures present assortative mixing connectivity values, meaning that there is a preference for clusters to link to other clusters that share similar functional connectivity, as well as a rich-club core, which shapes a "connectivity backbone" in the network. These results point out that the grouping of neurons and the assortative connectivity between clusters are intrinsic survival mechanisms of the culture.
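The delay-based reconstruction of directed links can be sketched with hypothetical burst-onset times: a directed edge is drawn when one cluster consistently activates shortly before another (the threshold, data, and consistency rule are invented for illustration; the paper's method is statistical).

```python
# Hypothetical burst-onset times (seconds) for three clusters across
# three bursts, as would be extracted from the recorded activity.
burst_onsets = [
    {"A": 0.00, "B": 0.012, "C": 0.030},
    {"A": 0.00, "B": 0.010, "C": 0.025},
    {"A": 0.00, "B": 0.014, "C": 0.028},
]

def directed_edges(bursts, max_delay=0.020):
    """Edge u -> v if u precedes v by at most max_delay in every burst."""
    clusters = sorted(bursts[0])
    edges = []
    for u in clusters:
        for v in clusters:
            if u == v:
                continue
            delays = [b[v] - b[u] for b in bursts]
            if all(0 < d <= max_delay for d in delays):
                edges.append((u, v))
    return edges

print(directed_edges(burst_onsets))  # A drives B, B drives C
```

Assortativity and rich-club measures would then be computed on the directed graph these edges define.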