26 results for K most critical paths
in CentAUR: Central Archive, University of Reading - UK
Abstract:
BACKGROUND: Chemical chitin extraction generates large amounts of waste and increases partial deacetylation of the product. The use of biological methods for chitin extraction is therefore an interesting alternative. The focal point of this study was a systematic investigation of the effects of process conditions on enzyme-assisted extraction of chitin from shrimp shells. RESULTS: Under demineralisation conditions of 25 °C, 20 min, a shells:lactic acid ratio of 1:1.1 w/w and a shells:acetic acid ratio of 1:1.2 w/w, the maximum demineralisation values were 98.64% and 97.57% for lactic and acetic acid, respectively. A total protein removal efficiency of 91.10% by protease from Streptomyces griseus, with an enzyme:substrate ratio of 55 U/g, pH 7.0 and an incubation time of 3 h, was obtained when the particle size range was 50-25 μm; particle size was identified as the most critical factor. X-ray diffraction and 13C NMR spectroscopy analyses showed that the lower percent crystallinity and higher degree of acetylation of chitin from enzyme-assisted extraction may confer better solubility and less depolymerisation in comparison with chitin from chemical extraction. CONCLUSION: The present work investigated the effects of individual factors on process yields and showed that, if the particle size is properly controlled, a reaction time of 3 h is more than enough for deproteination by protease. Physicochemical analysis indicated that enzyme-assisted production seems appropriate for extracting chitin while possibly retaining its native structure.
Abstract:
This paper develops cycle-level FPGA circuits of an organization for a fast path-based neural branch predictor. Our results suggest that practical prediction table sizes are limited to around 32 KB to 64 KB in current FPGA technology, due mainly to the FPGA logic resources required to maintain the tables. However, the predictor scales well in terms of prediction speed. Table size alone should therefore not be used as the only metric for hardware budget when comparing neural predictors to predictors of totally different organizations. This paper also gives early evidence that, for this class of branch predictors, attention should shift to misprediction recovery latency, rather than prediction latency, as the most critical factor affecting prediction accuracy.
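For readers unfamiliar with the family of predictors discussed above, the sketch below shows the simplest member: a global-history perceptron predictor in the style of Jiménez and Lin. It is a minimal software model only, not the paper's path-based organization (which spreads the dot product along the branch path) and not an FPGA circuit; the table size, history length, and threshold rule are conventional illustrative choices.

```python
class PerceptronPredictor:
    """Minimal global-history perceptron branch predictor (illustrative).

    Each table entry holds a bias weight plus one weight per global
    history bit; the prediction is the sign of the dot product of the
    weights with the +/-1-encoded history.
    """

    def __init__(self, n_entries=64, hist_len=12, theta=None):
        self.n_entries = n_entries
        self.hist_len = hist_len
        # Commonly used threshold tuning rule: theta = 1.93 * h + 14
        self.theta = theta if theta is not None else int(1.93 * hist_len + 14)
        self.w = [[0] * (hist_len + 1) for _ in range(n_entries)]
        self.ghr = [1] * hist_len  # global history, taken=+1 / not-taken=-1

    def predict(self, pc):
        w = self.w[pc % self.n_entries]
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.ghr))
        return y >= 0, y  # (predicted taken?, raw output)

    def update(self, pc, taken):
        pred, y = self.predict(pc)
        t = 1 if taken else -1
        w = self.w[pc % self.n_entries]
        # Train on a misprediction, or when the margin is below theta
        if pred != taken or abs(y) <= self.theta:
            w[0] += t
            for i, hi in enumerate(self.ghr):
                w[i + 1] += t * hi
        self.ghr = self.ghr[1:] + [t]  # shift in the new outcome
        return pred
```

On a history-predictable stream (e.g. a strictly alternating branch) the weights saturate within a few dozen updates and the predictor then tracks the pattern exactly, which is the behaviour the table-size/latency trade-offs above are about.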
Abstract:
From birth onwards, the gastrointestinal (GI) tract of infants progressively acquires a complex range of micro-organisms. It is thought that by 2 years of age the GI microbial population has stabilized. Within the developmental period of the infant GI microbiota, weaning is considered to be most critical, as the infant switches from a milk-based diet (breast and/or formula) to a variety of food components. Longitudinal analysis of the biological succession of the infant GI/faecal microbiota is lacking. In this study, faecal samples were obtained regularly from 14 infants from 1 month to 18 months of age. Seven of the infants (including a set of twins) were exclusively breast-fed and seven were exclusively formula-fed prior to weaning, with 175 and 154 faecal samples, respectively, obtained from each group. Diversity and dynamics of the infant faecal microbiota were analysed by using fluorescence in situ hybridization and denaturing gradient gel electrophoresis. Overall, the data demonstrated large inter- and intra-individual differences in the faecal microbiological profiles during the study period. However, the infant faecal microbiota converged with time towards a climax community within and between feeding groups. Data from the twins showed the highest degree of similarity, both quantitatively and qualitatively. Inter-individual variation was evident within the infant faecal microbiota and its development, even among exclusively formula-fed infants receiving the same diet. These data can help future clinical trials (e.g. of targeted weaning products) to organize protocols and obtain a more accurate outline of the changes and dynamics of the infant GI microbiota.
Abstract:
The IntFOLD-TS method was developed according to the guiding principle that model quality assessment would be the most critical stage for our template-based modelling pipeline. Thus, the IntFOLD-TS method first generates numerous alternative models, using in-house versions of several different sequence-structure alignment methods, which are then ranked in terms of global quality using our top-performing quality assessment method, ModFOLDclust2. In addition to the predicted global quality scores, predictions of local errors are also provided in the resulting coordinate files, using scores that represent the predicted deviation of each residue in the model from the equivalent residue in the native structure. The IntFOLD-TS method was found to generate high-quality 3D models for many of the CASP9 targets, whilst also providing highly accurate predictions of their per-residue errors. This important information may help to make the 3D models produced by the IntFOLD-TS method more useful for guiding future experimental work.
Abstract:
The distribution of nutrients and assimilates in different organs and tissues is in a constant state of flux throughout the growth and development of a plant. At key stages during the life cycle profound changes occur, and perhaps one of the most critical of these is during seed filling. By restricting the competition for reserves in Arabidopsis plants, the ability to manipulate seed size, seed weight, or seed content has been explored. Removal of secondary inflorescences and lateral branches resulted in a stimulation of elongation of the primary inflorescence and an increase in the distance between siliques. The pruning treatment also led to the development of longer and larger siliques that contained fewer, bigger seeds. This seems to be a consequence of a reduction in the number of ovules that develop and an increase in the fatty acid content of the seeds that mature. The data show that shoot architecture could have a substantial impact on the partitioning of reserves between vegetative and reproductive tissues and could be an important trait for selection in rapid phenotyping screens to optimize crop performance.
Abstract:
Over the last twenty years, consumer choice in high-income countries has no longer been dictated merely by price and the organoleptic characteristics of a product, but also by other features, some of which are not patently tangible. The growing importance of such attributes in consumer choice is due not only to rising incomes, but also to changes in lifestyle: migration from the countryside, generalized urbanization and the consequent city lifestyle, female emancipation and women's work outside the home, the drastic decrease in hard physical labour, and internationalization. The present survey study aims to explore the importance that Italian consumers attach to fresh-cut buying attributes, and which of these attributes industries should take into consideration in order to satisfy the needs of the most critical shoppers. Where possible, market and survey data for fresh-cut products are compared with those for cooked products. Before the results and conclusions of the study are presented, the technical issues of processing are highlighted, owing to the fact that they affect the marketing of these products; the recent market situation with regard to consumption is illustrated; and the methodology used is described.
Abstract:
The oxidation of organic films on cloud condensation nuclei has the potential to affect climate and precipitation events. In this work we present a study of the oxidation of a monolayer of deuterated oleic acid (cis-9-octadecenoic acid) at the air-water interface by ozone, to determine whether oxidation removes the organic film or replaces it with a product film. A range of different aqueous sub-phases was studied. The surface excess of deuterated material was followed by neutron reflection, whilst the surface pressure was followed using a Wilhelmy plate. The neutron reflection data reveal that approximately half the organic material remains at the air-water interface following the oxidation of oleic acid by ozone; thus cleavage of the double bond by ozone creates one surface-active species and one species that partitions to the bulk (or gas) phase. The most probable products, produced with a yield of approximately (87 ± 14)%, are nonanoic acid, which remains at the interface, and azelaic acid (nonanedioic acid), which dissolves into the bulk solution. We also report a surface bimolecular rate constant for the reaction between ozone and oleic acid of (7.3 ± 0.9) × 10⁻¹¹ cm² molecule⁻¹ s⁻¹. The rate constant and product yield are not affected by the solution sub-phase. An uptake coefficient of ozone on the oleic acid monolayer of ~4 × 10⁻⁶ is estimated from our results. A simple Köhler analysis demonstrates that the oxidation of oleic acid by ozone on an atmospheric aerosol will lower the critical supersaturation needed for cloud droplet formation. We calculate an atmospheric chemical lifetime of oleic acid of 1.3 hours, significantly longer than laboratory studies on pure oleic acid particles suggest, but more consistent with field studies reporting oleic acid present in aged atmospheric aerosol.
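The quoted lifetime follows from simple first-order surface kinetics, τ = 1 / (k_s · [O₃]_surf). The sketch below reproduces the arithmetic; the surface ozone concentration used is an assumed, illustrative value chosen to be consistent with the quoted 1.3 h (the study's actual gas-to-surface partitioning treatment is not reproduced here).

```python
# Back-of-envelope surface lifetime of an oleic acid monolayer
# against ozonolysis: tau = 1 / (k_s * [O3]_surf).

k_s = 7.3e-11        # cm^2 molecule^-1 s^-1, surface rate constant (from the abstract)
o3_surface = 2.9e6   # molecule cm^-2, ASSUMED illustrative surface O3 concentration

tau_s = 1.0 / (k_s * o3_surface)  # lifetime in seconds
tau_h = tau_s / 3600.0
print(f"monolayer lifetime ~ {tau_h:.1f} h")
```

With these numbers the lifetime comes out at roughly 1.3 h, matching the figure quoted above; the conclusion in the abstract is that this is longer than pure-particle laboratory studies suggest.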
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. The lack of scalable and fault-tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (EpidemicK-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of the sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
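The key building block of such epidemic approaches is gossip averaging: each node repeatedly averages its local statistics with a random peer, so every node's estimate converges to the network-wide average without any global communication. The sketch below simulates this for the per-cluster (point sum, point count) statistics of one K-Means cluster; it is a minimal illustration of the gossip-averaging principle, not the paper's EpidemicK-Means protocol, and all names and values are illustrative.

```python
import random
import numpy as np

def gossip_cluster_centroid(local_stats, rounds=200, seed=0):
    """Estimate a global cluster centroid by pairwise gossip averaging.

    local_stats: per-node (vector_sum, count) pairs for ONE cluster,
    where vector_sum is the sum of the node's points assigned to it.
    Pairwise averaging drives every node's (sum, count) estimate to
    the network-wide mean; the ratio of the two averages equals the
    centroid an ideal centralised algorithm would compute.
    """
    rng = random.Random(seed)
    sums = [np.asarray(s, dtype=float) for s, _ in local_stats]
    counts = [float(c) for _, c in local_stats]
    n = len(sums)
    for _ in range(rounds):
        # Pick a random pair of nodes and average their local estimates
        i, j = rng.sample(range(n), 2)
        avg_s = (sums[i] + sums[j]) / 2.0
        avg_c = (counts[i] + counts[j]) / 2.0
        sums[i] = sums[j] = avg_s
        counts[i] = counts[j] = avg_c
    # Every node can now compute the (approximate) global centroid locally
    return [s / c for s, c in zip(sums, counts)]
```

Because pairwise averaging preserves the totals, the per-node ratio sum/count converges to (total sum)/(total count), i.e. the exact global centroid; a full K-Means iteration then only needs each node to reassign its local points to the gossiped centroids and repeat.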
Abstract:
Objectives. While older adults often display memory deficits, with practice they can sometimes selectively remember valuable information at the expense of less valuable information. We examined age-related differences and similarities in memory for health-related information under conditions where some information was critical to remember. Method. In Experiment 1, participants studied three lists of allergens, ranging in severity from 0 (not a health risk) to 10 (potentially fatal), with the instruction that it was particularly important to remember items to which a fictional relative was most severely allergic. After each list, participants received feedback regarding their recall of the high-value allergens. Experiment 2 examined memory for health benefits, presenting foods that were potentially beneficial to the relative's immune system. Results. While younger adults exhibited better overall memory for the allergens, both age groups in Experiment 1 developed improved selectivity across the lists, with no evident age differences in severe allergen recall by List 2. Selectivity also developed in Experiment 2, although age differences for items of high health benefit were present. Discussion. The results have implications for models of selective memory in older age, and for how aging influences the ability to strategically remember important information within health-related contexts.
Abstract:
This article describes the development and evaluation of the U.K.'s new High-Resolution Global Environmental Model (HiGEM), which is based on the latest climate configuration of the Met Office Unified Model, known as the Hadley Centre Global Environmental Model, version 1 (HadGEM1). In HiGEM, the horizontal resolution has been increased to 0.83° latitude × 1.25° longitude for the atmosphere, and 1/3° × 1/3° globally for the ocean. Multidecadal integrations of HiGEM, and the lower-resolution HadGEM, are used to explore the impact of resolution on the fidelity of climate simulations. Generally, SST errors are reduced in HiGEM. Cold SST errors associated with the path of the North Atlantic drift improve, and warm SST errors are reduced in upwelling stratocumulus regions, where the simulation of low-level cloud is better at higher resolution. The ocean model in HiGEM allows ocean eddies to be partially resolved, which dramatically improves the representation of sea surface height variability. In the Southern Ocean, most of the heat transport in HiGEM is achieved by resolved eddy motions, which replaces the parameterized eddy heat transport in the lower-resolution model. HiGEM is also able to more realistically simulate small-scale features in the wind stress curl around islands and oceanic SST fronts, which may have implications for oceanic upwelling and ocean biology. Higher resolution in both the atmosphere and the ocean allows coupling to occur on small spatial scales. In particular, the small-scale interaction recently seen in satellite imagery between the atmosphere and tropical instability waves in the tropical Pacific Ocean is realistically captured in HiGEM. Tropical instability waves play a role in improving the simulation of the mean state of the tropical Pacific, which has important implications for climate variability.
In particular, all aspects of the simulation of ENSO (spatial patterns, the time scales at which ENSO occurs, and global teleconnections) are much improved in HiGEM.
Abstract:
Critical loads are the basis for policies controlling emissions of acidic substances in Europe. The implementation of these policies involves large expenditures, and it is reasonable for policymakers to ask what degree of certainty can be attached to the underlying critical load and exceedance estimates. This paper is a literature review of studies which attempt to estimate the uncertainty attached to critical loads. Critical load models and uncertainty analysis are briefly outlined. Most studies have used Monte Carlo analysis of some form to investigate the propagation of uncertainties in the definition of the input parameters through to uncertainties in critical loads. Though the input parameters are often poorly known, the critical load uncertainties are typically surprisingly small because of a "compensation of errors" mechanism. These results depend on the quality of the uncertainty estimates of the input parameters, and a "pedigree" classification for these is proposed. Sensitivity analysis shows that some input parameters are more important in influencing critical load uncertainty than others, but there have not been enough studies to form a general picture. Methods used for dealing with spatial variation are briefly discussed. Application of alternative models to the same site or modifications of existing models can lead to widely differing critical loads, indicating that research into the underlying science needs to continue.
Abstract:
This paper reports an uncertainty analysis of critical loads for acid deposition for a site in southern England, using the Steady State Mass Balance Model. The uncertainty bounds, distribution type and correlation structure for each of the 18 input parameters were considered explicitly, and overall uncertainty was estimated by Monte Carlo methods. Estimates of deposition uncertainty were made from measured data and an atmospheric dispersion model, and hence the uncertainty in exceedance could also be calculated. The uncertainties of the calculated critical loads were generally much lower than those of the input parameters due to a "compensation of errors" mechanism - coefficients of variation ranged from 13% for CLmaxN to 37% for CL(A). With 1990 deposition, the probability that the critical load was exceeded was > 0.99; to reduce this probability to 0.50, a 63% reduction in deposition is required; to 0.05, an 82% reduction. With 1997 deposition, which was lower than that in 1990, exceedance probabilities declined and uncertainties in exceedance narrowed, as deposition uncertainty had less effect. The parameters contributing most to the uncertainty in critical loads were weathering rates, base cation uptake rates, and the choice of critical chemical value, indicating possible research priorities. However, the different critical load parameters were to some extent sensitive to different input parameters. The application of such probabilistic results to environmental regulation is discussed.
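The Monte Carlo machinery used in such studies is straightforward to sketch: sample each input parameter from its assigned distribution, evaluate the critical load formula for each sample, and read off the output spread and the exceedance probability against a deposition distribution. The sketch below uses a simplified mass-balance form, CL(A) = BCw - BCu - ANCle,crit, with entirely hypothetical input distributions; it illustrates the propagation and exceedance calculation only and does not reproduce the study's site data, its 18-parameter correlation structure, or the compensation-of-errors result (which depends on those correlations).

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo sample size

# Hypothetical input distributions (illustrative values, not site data):
bc_w = rng.lognormal(mean=np.log(100.0), sigma=0.30, size=N)  # base cation weathering
bc_u = rng.lognormal(mean=np.log(40.0), sigma=0.30, size=N)   # base cation uptake
anc_crit = rng.normal(loc=-20.0, scale=6.0, size=N)           # critical ANC leaching

# Simplified critical load of acidity for each Monte Carlo sample
cl_a = bc_w - bc_u - anc_crit

# Assumed deposition distribution, giving an exceedance probability
dep = rng.normal(loc=120.0, scale=15.0, size=N)
p_exceed = float((dep > cl_a).mean())

cv = cl_a.std() / cl_a.mean()  # coefficient of variation of CL(A)
print(f"CL(A) CV = {cv:.2f}, P(exceedance) = {p_exceed:.2f}")
```

In the study itself the interesting finding is that, with realistic parameter correlations, the output CV can fall well below the input CVs; with the independent toy distributions above it does not, which is precisely why the correlation structure of the 18 inputs had to be modelled explicitly.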