910 results for content distribution networks
Abstract:
Rocket is a leafy brassicaceous salad crop that encompasses two major genera (Diplotaxis and Eruca) and many different cultivars. Rocket is a rich source of antioxidants and glucosinolates, many of which are produced as secondary products by the plant in response to stress. In this paper we examined the impact of temperature and light stress on several different cultivars of wild and salad rocket. Growth habit of the plants varied in response to stress and with different genotypes, reflecting the wide geographical distribution of the plant and the different environments to which the genera have naturally adapted. Preharvest environmental stress and genotype also had an impact on how well the cultivar was able to resist postharvest senescence, indicating that breeding or selection of senescence-resistant genotypes will be possible in the future. The abundance of key phytonutrients such as carotenoids and glucosinolates is also under genetic control. As genetic resources improve for rocket it will therefore be possible to develop a molecular breeding programme specifically targeted at improving stress resistance and nutritional levels of plant secondary products. Concomitantly, it has been shown in this paper that controlled levels of abiotic stress can potentially improve the levels of chlorophyll, carotenoids and antioxidant activity in this leafy vegetable.
Abstract:
Undirected graphical models are widely used in statistics, physics and machine vision. However, Bayesian parameter estimation for undirected models is extremely challenging, since evaluation of the posterior typically involves the calculation of an intractable normalising constant. This problem has received much attention, but very little of this has focussed on the important practical case where the data consists of noisy or incomplete observations of the underlying hidden structure. This paper specifically addresses this problem, comparing two alternative methodologies. In the first of these approaches particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently explore the parameter space, combined with the exchange algorithm (Murray et al., 2006) for avoiding the calculation of the intractable normalising constant (a proof showing that this combination targets the correct distribution is found in a supplementary appendix online). This approach is compared with approximate Bayesian computation (Pritchard et al., 1999). Applications to estimating the parameters of Ising models and exponential random graphs from noisy data are presented. Each algorithm used in the paper targets an approximation to the true posterior due to the use of MCMC to simulate from the latent graphical model, in lieu of being able to do this exactly in general. The supplementary appendix also describes the nature of the resulting approximation.
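The key trick of the exchange algorithm is that the intractable normalising constants cancel in the Metropolis-Hastings acceptance ratio. A minimal sketch for a toy one-parameter unnormalised model f(y|θ) ∝ exp(θy) on {0, …, K}, where perfect simulation is possible by enumeration (the paper replaces this exact draw with an inner MCMC run, hence the approximation it discusses); the model, prior and step size here are illustrative assumptions:

```python
import math
import random

def unnorm_loglik(y, theta):
    # Unnormalised log-likelihood: f(y | theta) = exp(theta * y), y in {0,...,K}.
    return theta * y

def sample_exact(theta, K=10):
    # Exact sampling by enumeration; stands in for perfect simulation
    # (in general one must fall back on an inner MCMC run).
    weights = [math.exp(theta * y) for y in range(K + 1)]
    u = random.random() * sum(weights)
    acc = 0.0
    for y, w in enumerate(weights):
        acc += w
        if u <= acc:
            return y
    return K

def exchange_step(theta, y, log_prior, step=0.5, K=10):
    # One exchange-algorithm update with a symmetric Gaussian proposal.
    # The normalising constants Z(theta) and Z(theta') cancel below.
    theta_new = theta + random.gauss(0.0, step)
    w = sample_exact(theta_new, K)  # auxiliary draw from the proposed model
    log_alpha = (log_prior(theta_new) - log_prior(theta)
                 + unnorm_loglik(y, theta_new) - unnorm_loglik(y, theta)
                 + unnorm_loglik(w, theta) - unnorm_loglik(w, theta_new))
    if math.log(random.random()) < log_alpha:
        return theta_new
    return theta
```

Iterating `exchange_step` yields a chain whose stationary distribution is the exact posterior here, because the auxiliary draw is perfect; with an inner MCMC sampler in its place, the chain targets the approximation the abstract describes.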
Abstract:
The formation of complexes in solutions containing positively charged polyions (polycations) and a variable amount of negatively charged polyions (polyanions) has been investigated by Monte Carlo simulations. The polyions were described as flexible chains of charged hard spheres interacting through a screened Coulomb potential. The systems were analyzed in terms of cluster compositions, structure factors, and radial distribution functions. At 50% charge equivalence or less, complexes involving two polycations and one polyanion were frequent, while closer to charge equivalence, larger clusters were formed. Small and neutral complexes dominated the solution at charge equivalence in a monodisperse system, while larger clusters again dominated the solution when the polyions were made polydisperse. The cluster composition and solution structure were also examined as functions of added salt by varying the electrostatic screening length. The observed formation of clusters could be rationalized by a few simple rules.
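The pair interaction described, charged hard spheres with a screened Coulomb potential, is a Yukawa form with a hard-core cutoff. A minimal sketch in reduced (kT) units; the Bjerrum length and bead diameter below are illustrative assumptions, not values from the paper:

```python
import math

def screened_coulomb(r, zi, zj, kappa, l_bjerrum=0.7105, sigma=0.4):
    # Pair energy (in units of kT) between two charged hard spheres.
    # r, sigma, 1/kappa and l_bjerrum share the same length unit (e.g. nm);
    # zi, zj are the bead valences; kappa is the inverse screening length,
    # which added salt increases (shortening the screening length).
    if r < sigma:
        return float("inf")  # hard-sphere overlap is forbidden
    return l_bjerrum * zi * zj * math.exp(-kappa * r) / r
```

Summing this pair energy over all bead pairs of two flexible chains gives the polycation-polyanion attraction that drives the cluster formation analysed in the paper.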
Abstract:
An extensive data set of total arsenic analysis for 901 polished (white) grain samples, originating from 10 countries on 4 continents, was compiled. The samples represented the baseline (i.e., not specifically collected from arsenic contaminated areas), and all were for market sale in major conurbations. Median total arsenic contents of rice varied 7-fold, with Egypt (0.04 mg/kg) and India (0.07 mg/kg) having the lowest arsenic content while the U.S. (0.25 mg/kg) and France (0.28 mg/kg) had the highest content. Global distribution of total arsenic in rice was modeled by weighting each country’s arsenic distribution by that country’s contribution to global production. A subset of 63 samples from Bangladesh, China, India, Italy, and the U.S. was analyzed for arsenic species. The relationship between inorganic arsenic content versus total arsenic content significantly differed among countries, with Bangladesh and India having the steepest slope in linear regression, and the U.S. having the shallowest slope. Using country-specific rice consumption data, daily intake of inorganic arsenic was estimated and the associated internal cancer risk was calculated using the U.S. Environmental Protection Agency (EPA) cancer slope. Median excess internal cancer risks posed by inorganic arsenic varied 30-fold across the 5 countries examined, from 0.7 per 10,000 for Italians to 22 per 10,000 for Bangladeshis, assuming a 60 kg person.
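The intake-to-risk calculation is simple linear arithmetic: dose per kg body weight times a cancer slope factor. A minimal sketch using EPA's oral cancer slope factor for inorganic arsenic (1.5 per mg/kg-day); the concentration and consumption figures in the example are illustrative assumptions, not values from the paper:

```python
def excess_cancer_risk(i_as_mg_per_kg, rice_kg_per_day, body_kg=60.0,
                       slope_per_mg_kg_day=1.5):
    # Daily inorganic-arsenic dose normalised by body weight (mg/kg-bw/day),
    # converted to lifetime excess cancer risk by the linear slope factor.
    dose = i_as_mg_per_kg * rice_kg_per_day / body_kg
    return slope_per_mg_kg_day * dose
```

For instance, 0.08 mg/kg inorganic arsenic in rice eaten at 0.45 kg/day by a 60 kg person gives a dose of 6e-4 mg/kg-bw/day, i.e. an excess risk of about 9 per 10,000, the order of magnitude the abstract reports for high-consumption countries.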
Abstract:
In this paper we consider the structure of dynamically evolving networks modelling information and activity moving across a large set of vertices. We adopt the communicability concept that generalizes that of centrality which is defined for static networks. We define the primary network structure within the whole as comprising the most influential vertices (both as senders and receivers of dynamically sequenced activity). We present a methodology based on successive vertex knockouts, up to a very small fraction of the whole primary network, that can characterize the nature of the primary network as being either relatively robust and lattice-like (with redundancies built in) or relatively fragile and tree-like (with sensitivities and few redundancies). We apply these ideas to the analysis of evolving networks derived from fMRI scans of resting human brains. We show that the estimation of performance parameters via the structure tests of the corresponding primary networks is subject to less variability than that observed across a very large population of such scans. Hence the differences within the population are significant.
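One standard formulation of dynamic communicability (following Grindrod and Higham; whether the paper uses exactly this resolvent-product form is an assumption) accumulates walks through a sequence of adjacency snapshots, after which the knockout procedure simply zeroes a vertex's edges in every snapshot and recomputes. A minimal sketch:

```python
import numpy as np

def dynamic_communicability(adj_seq, a=0.1):
    # Q = prod_k (I - a*A_k)^{-1} over the ordered snapshots A_k.
    # Row sums rank vertices as broadcasters, column sums as receivers;
    # 'a' must be below 1/(spectral radius) of each A_k for convergence.
    n = adj_seq[0].shape[0]
    Q = np.eye(n)
    for A in adj_seq:
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)
    return Q.sum(axis=1), Q.sum(axis=0)  # broadcast, receive scores

def knockout(adj_seq, vertex):
    # A single vertex knockout: remove its edges in every snapshot.
    out = []
    for A in adj_seq:
        B = A.copy()
        B[vertex, :] = 0.0
        B[:, vertex] = 0.0
        out.append(B)
    return out
```

Repeating `knockout` on the currently top-ranked vertices and watching how quickly the remaining scores collapse distinguishes the fragile, tree-like primary networks from the robust, lattice-like ones described above.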
Abstract:
The aetiology of breast cancer is multifactorial. While there are known genetic predispositions to the disease it is probable that environmental factors are also involved. Recent research has demonstrated a regionally specific distribution of aluminium in breast tissue from mastectomies while other work has suggested mechanisms whereby breast tissue aluminium might contribute towards the aetiology of breast cancer. We sought to develop microwave digestion combined with a new form of graphite furnace atomic absorption spectrometry as a precise, accurate and reproducible method for the measurement of aluminium in breast tissue biopsies. We have used this method to test the thesis that there is a regional distribution of aluminium across the breast in women with breast cancer. Microwave digestion of whole breast tissue samples resulted in clear homogeneous digests suitable for the determination of aluminium by graphite furnace atomic absorption spectrometry. The instrument detection limit for the method was 0.48 μg/L. Method blanks were used to estimate background levels of contamination of 14.80 μg/L. The mean concentration of aluminium across all tissues was 0.39 μg Al/g tissue dry wt. There were no statistically significant regionally specific differences in the content of aluminium. We have developed a robust method for the precise and accurate measurement of aluminium in human breast tissue. There are very few such data currently available in the scientific literature and these data will add substantially to our understanding of any putative role of aluminium in breast cancer. While we did not observe any statistically significant differences in aluminium content across the breast it has to be emphasised that herein we measured whole breast tissue and not defatted tissue where such a distribution was previously noted.
We are very confident that the method developed herein could now be used to provide accurate and reproducible data on the aluminium content in defatted tissue and oil from such tissues and thereby contribute towards our knowledge on aluminium and any role in breast cancer.
Abstract:
Polymers with the ability to heal themselves could provide access to materials with extended lifetimes in a wide range of applications such as surface coatings, automotive components and aerospace composites. Here we describe the synthesis and characterisation of two novel, stimuli-responsive, supramolecular polymer blends based on π-electron-rich pyrenyl residues and π-electron-deficient, chain-folding aromatic diimides that interact through complementary π–π stacking interactions. Different degrees of supramolecular “cross-linking” were achieved by use of divalent or trivalent poly(ethylene glycol)-based polymers featuring pyrenyl end-groups, blended with a known diimide–ether copolymer. The mechanical properties of the resulting polymer blends revealed that higher degrees of supramolecular “cross-link density” yield materials with enhanced mechanical properties, such as increased tensile modulus, modulus of toughness, elasticity and yield point. After a number of break/heal cycles, these materials were found to retain the characteristics of the pristine polymer blend, and this new approach thus offers a simple route to mechanically robust yet healable materials.
Abstract:
There are well-known difficulties in making measurements of the moisture content of baked goods (such as bread, buns, biscuits, crackers and cake) during baking or at the oven exit; in this paper several sensing methods are discussed, but none of them are able to provide direct measurement with sufficient precision. An alternative is to use indirect inferential methods. Some of these methods involve dynamic modelling, with incorporation of thermal properties and using techniques familiar in computational fluid dynamics (CFD); a method of this class that has been used for the modelling of heat and mass transfer in one direction during baking is summarized, which may be extended to model transport of moisture within the product and also within the surrounding atmosphere. The concept of injecting heat during the baking process proportional to the calculated heat load on the oven has been implemented in a control scheme based on heat balance zone by zone through a continuous baking oven, taking advantage of the high latent heat of evaporation of water. Tests on biscuit production ovens are reported, with results that support a claim that the scheme gives more reproducible water distribution in the final product than conventional closed loop control of zone ambient temperatures, thus enabling water content to be held more closely within tolerance.
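The zone-by-zone heat balance rests on a simple decomposition: the heat demanded by each oven zone is the sensible heat to raise the product's temperature plus the latent heat of the water evaporated there, the latter dominating because of water's high latent heat of evaporation. A minimal sketch; the property values are illustrative assumptions, not figures from the paper:

```python
def zone_heat_load(mass_flow_kg_s, d_temp_K, d_moisture_kg_kg,
                   cp_J_kgK=2000.0, latent_J_kg=2.26e6):
    # Heat load (W) for one oven zone: sensible heating of the product
    # stream plus latent heat of the water evaporated in that zone.
    # cp_J_kgK is an assumed product specific heat; latent_J_kg is the
    # latent heat of evaporation of water.
    sensible = mass_flow_kg_s * cp_J_kgK * d_temp_K
    latent = mass_flow_kg_s * d_moisture_kg_kg * latent_J_kg
    return sensible + latent
```

In the control scheme described, the heat injected into each zone is made proportional to this calculated load, rather than regulating zone ambient temperature alone, which is why the moisture removed per zone (and hence the final water distribution) is held more reproducibly.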
Abstract:
Smart meters are becoming more ubiquitous as governments aim to reduce the risks to the energy supply as the world moves toward a low carbon economy. The data they provide could create a wealth of information to better understand customer behaviour. However, at the household level, and even at the low voltage (LV) substation level, energy demand is extremely volatile, irregular and noisy compared to the demand at the high voltage (HV) substation level. Novel analytical methods will be required in order to optimise the use of household level data. In this paper we briefly outline some mathematical techniques which will play a key role in better understanding the customer's behaviour and in creating solutions for supporting the network at the LV substation level.
Abstract:
In order to calculate unbiased microphysical and radiative quantities in the presence of a cloud, it is necessary to know not only the mean water content but also the distribution of this water content. This article describes a study of the in-cloud horizontal inhomogeneity of ice water content, based on CloudSat data. In particular, by focusing on the relations with variables that are already available in general circulation models (GCMs), a parametrization of inhomogeneity that is suitable for inclusion in GCM simulations is developed. Inhomogeneity is defined in terms of the fractional standard deviation (FSD), which is given by the standard deviation divided by the mean. The FSD of ice water content is found to increase with the horizontal scale over which it is calculated and also with the thickness of the layer. The connection to cloud fraction is more complicated; for small cloud fractions FSD increases as cloud fraction increases while FSD decreases sharply for overcast scenes. The relations to horizontal scale, layer thickness and cloud fraction are parametrized in a relatively simple equation. The performance of this parametrization is tested on an independent set of CloudSat data. The parametrization is shown to be a significant improvement on the assumption of a single-valued global FSD.
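The FSD statistic itself is straightforward to compute from a along-track sample of water contents; a minimal sketch (treating zero values as clear sky, an assumption for illustration):

```python
import numpy as np

def fractional_std_dev(water_content):
    # FSD = standard deviation / mean of the in-cloud water content,
    # computed over cloudy samples only (zeros treated as clear sky).
    wc = np.asarray(water_content, dtype=float)
    cloudy = wc[wc > 0]
    return cloudy.std() / cloudy.mean()
```

Computing this over windows of increasing along-track length is how the scale dependence described above (FSD growing with horizontal scale) can be diagnosed from the CloudSat profiles.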
Abstract:
The subgrid-scale spatial variability in cloud water content can be described by a parameter f called the fractional standard deviation. This is equal to the standard deviation of the cloud water content divided by the mean. This parameter is an input to schemes that calculate the impact of subgrid-scale cloud inhomogeneity on gridbox-mean radiative fluxes and microphysical process rates. A new regime-dependent parametrization of the spatial variability of cloud water content is derived from CloudSat observations of ice clouds. In addition to the dependencies on horizontal and vertical resolution and cloud fraction included in previous parametrizations, the new parametrization includes an explicit dependence on cloud type. The new parametrization is then implemented in the Global Atmosphere 6 (GA6) configuration of the Met Office Unified Model and used to model the effects of subgrid variability of both ice and liquid water content on radiative fluxes and autoconversion and accretion rates in three 20-year atmosphere-only climate simulations. These simulations show the impact of the new regime-dependent parametrization in diagnostic radiation calculations, in interactive radiation calculations, and in both interactive radiation calculations and a new warm microphysics scheme. The control simulation uses a globally constant f value of 0.75 to model the effect of cloud water content variability on radiative fluxes. The use of the new regime-dependent parametrization in the model results in a global mean f which is higher than the control's fixed value and a global distribution of f which is closer to CloudSat observations. When the new regime-dependent parametrization is used in radiative transfer calculations only, the magnitudes of short-wave and long-wave top of atmosphere cloud radiative forcing are reduced, increasing the existing global mean biases in the control. When also applied in a new warm microphysics scheme, the short-wave global mean bias is reduced.
Abstract:
Background Many biominerals form from amorphous calcium carbonate (ACC), but this phase is highly unstable when synthesised in its pure form inorganically. Several species of earthworm secrete calcium carbonate granules which contain highly stable ACC. We analysed the milky fluid from which granules form and solid granules for amino acid (by liquid chromatography) and functional group (by Fourier transform infrared (FTIR) spectroscopy) compositions. Granule elemental composition was determined using inductively coupled plasma-optical emission spectroscopy (ICP-OES) and electron microprobe analysis (EMPA). Mass of ACC present in solid granules was quantified using FTIR and compared to granule elemental and amino acid compositions. Bulk analysis of granules was of powdered bulk material; spatially resolved analysis was of thin sections of granules using synchrotron-based μ-FTIR and EMPA. Results The milky fluid from which granules form is amino acid-rich (≤ 136 ± 3 nmol mg−1 (n = 3; ± std dev) per individual amino acid); the CaCO3 phase present is ACC. Even four years after production, granules contain ACC. No correlation exists between mass of ACC present and granule elemental composition. Granule amino acid concentrations correlate well with ACC content (r ≥ 0.7, p ≤ 0.05), consistent with a role for amino acids (or the proteins they make up) in ACC stabilisation. Intra-granule variation in ACC (RSD = 16%) and amino acid concentration (RSD = 22–35%) was high for granules produced by the same earthworm. Maps of ACC distribution produced using synchrotron-based μ-FTIR mapping of granule thin sections and the relative intensity of the ν2:ν4 peak ratio, cluster analysis and component regression using ACC and calcite standards showed similar spatial distributions of likely ACC-rich and calcite-rich areas.
We could not identify organic peaks in the μ-FTIR spectra and thus could not determine whether ACC-rich domains also had relatively high amino acid concentrations. No correlation exists between ACC distribution and elemental concentrations determined by EMPA. Conclusions ACC present in earthworm CaCO3 granules is highly stable. Our results suggest a role for amino acids (or proteins) in this stability. We see no evidence for stabilisation of ACC by incorporation of inorganic components.
Abstract:
We present ocean model sensitivity experiments aimed at separating the influence of the projected changes in the “thermal” (near-surface air temperature) and “wind” (near-surface winds) forcing on the patterns of sea level and ocean heat content. In the North Atlantic, the distribution of sea level change is more due to the “thermal” forcing, whereas it is more due to the “wind” forcing in the North Pacific; in the Southern Ocean, the “thermal” and “wind” forcing have a comparable influence. In the ocean adjacent to Antarctica the “thermal” forcing leads to an inflow of warmer waters on the continental shelves, which is somewhat attenuated by the “wind” forcing. The structure of the vertically integrated heat uptake is set by different processes at low and high latitudes: at low latitudes it is dominated by the heat transport convergence, whereas at high latitudes it represents a small residual of changes in the surface flux and advection of heat. The structure of the horizontally integrated heat content tendency is set by the increase of downward heat flux by the mean circulation and comparable decrease of upward heat flux by the subgrid-scale processes; the upward eddy heat flux decreases and increases by almost the same magnitude in response to, respectively, the “thermal” and “wind” forcing. Regionally, the surface heat loss and deep convection weaken in the Labrador Sea, but intensify in the Greenland Sea in the region of sea ice retreat. The enhanced heat flux anomaly in the subpolar Atlantic is mainly caused by the “thermal” forcing.
Abstract:
A discrete-time random process is described, which can generate bursty sequences of events. A Bernoulli process, where the probability of an event occurring at time t is given by a fixed probability x, is modified to include a memory effect where the event probability is increased proportionally to the number of events that occurred within a given amount of time preceding t. For small values of x the interevent time distribution follows a power law with exponent −2−x. We consider a dynamic network where each node forms and breaks connections according to this process. The value of x for each node depends on the fitness distribution, ρ(x), from which it is drawn; we find exact solutions for the expectation of the degree distribution for a variety of possible fitness distributions, both with and without the memory effect present. This work can potentially lead to methods to uncover hidden fitness distributions from fast changing, temporal network data, such as online social communications and fMRI scans.
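A minimal simulation sketch of the process, under one concrete reading of the memory rule (event probability x scaled up linearly with the count of events in a trailing window; the window length and proportionality constant are assumptions for illustration):

```python
import random

def bursty_sequence(x, steps, memory=10, boost=1.0, seed=None):
    # Bernoulli process with memory: at each step the base probability x
    # is raised in proportion (factor `boost`) to the number of events
    # in the preceding `memory` steps, then capped at 1.
    rng = random.Random(seed)
    events = []
    for t in range(steps):
        recent = sum(1 for s in events if t - memory <= s < t)
        p = min(1.0, x * (1.0 + boost * recent))
        if rng.random() < p:
            events.append(t)
    return events

def interevent_times(events):
    # Gaps between consecutive events; for small x their distribution
    # is the heavy-tailed quantity the abstract characterises.
    return [b - a for a, b in zip(events, events[1:])]
```

Because each event raises the short-term event probability, events cluster into bursts separated by long quiet gaps, which is what produces the heavy-tailed interevent time distribution for small x.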
Abstract:
Monolayers of neurons and glia have been employed for decades as tools for the study of cellular physiology and as the basis for a variety of standard toxicological assays. A variety of three dimensional (3D) culture techniques have been developed with the aim to produce cultures that recapitulate desirable features of intact tissue. In this study, we investigated the effect of preparing primary mouse mixed neuron and glial cultures in the inert 3D scaffold, Alvetex. Using planar multielectrode arrays, we compared the spontaneous bioelectrical activity exhibited by neuroglial networks grown in the scaffold with that seen in the same cells prepared as conventional monolayer cultures. Two dimensional (monolayer; 2D) cultures exhibited a significantly higher spike firing rate than that seen in 3D cultures, although no difference was seen in total signal power (<50 Hz), while pharmacological responsiveness of each culture type to antagonism of GABAAR, NMDAR and AMPAR was highly comparable. Interestingly, correlation of burst events, spike firing and total signal power (<50 Hz) revealed that local field potential events were associated with action potential driven bursts, as was the case for 2D cultures. Moreover, glial morphology was more physiologically normal in 3D cultures. These results show that 3D culture in inert scaffolds represents a more physiologically normal preparation which has advantages for physiological, pharmacological, toxicological and drug development studies, particularly given the extensive use of such preparations in high throughput and high content systems.