991 results for Stream processing


Relevance: 20.00%

Abstract:

Lactic acid (LA) has significant market potential for many industries, including food, cosmetics, pharmaceuticals, medical and biodegradable materials. Production of LA usually begins with the fermentation of glucose, but the subsequent stages for enriching the lactic acid are complex and energy intensive and could be minimised using water-selective membrane technology. In this work, we trialled a highly selective, hydrostable carbonised template molecular sieve silica (CTMSS) membrane for the dehydration of a 15 vol% aqueous lactic acid solution containing 0.1 vol% glucose. CTMSS membrane films were developed by dip-coating ceramic substrates with silica sols made using the acid-catalysed sol-gel process. Permeation was performed by feeding the LA/glucose solution to the membrane cell at 18°C in a standard pervaporation setup. The membrane showed selective transport of water from the aqueous feed to the permeate, while glucose was not detected. The CTMSS membrane permeate flux stabilised at 0.2 kg m⁻² h⁻¹ in 3.9 hours and reduced LA in the permeate to less than 0.2 vol%. Flux through the CTMSS micropores was thermally activated, with the initial flux increasing to 1.58 kg m⁻² h⁻¹ at 60°C. To enrich a 1 L min⁻¹ stream to 85% LA in a single stage, a minimum membrane area of 324 m² would be required at 18°C. Increasing the operating temperature to 80°C reduced this area significantly, to 24 m², but LA levels in the permeate stream increased to 0.5 vol%. The highly selective CTMSS membrane technology is an ideal candidate for LA purification. CTMSS membrane systems operate stably in aqueous systems, leading to potential cost reductions in LA processing for future markets.
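
As a rough illustration of how the single-stage membrane area scales with water flux, the sketch below performs a simplified water mass balance. The feed composition, unit permeate density and the flux values used are assumptions for the sketch; it ignores flux decline over time and solution densities, so it only reproduces the order of magnitude of the areas reported above.

```python
# Illustrative single-stage sizing for pervaporative water removal.
# All inputs are assumptions for this sketch, not the study's mass balance.

FEED_FLOW_L_PER_MIN = 1.0      # feed stream, L/min
LA_FRACTION_FEED = 0.15        # 15 vol% lactic acid in the feed
LA_FRACTION_TARGET = 0.85      # target LA fraction in the retentate
WATER_DENSITY_KG_PER_L = 1.0   # treat the permeate as pure water

def required_area(flux_kg_m2_hr: float) -> float:
    """Membrane area needed to remove enough water in a single stage."""
    feed_l_hr = FEED_FLOW_L_PER_MIN * 60.0
    la_l_hr = feed_l_hr * LA_FRACTION_FEED
    water_in_l_hr = feed_l_hr - la_l_hr
    # Water allowed to remain so the retentate reaches the target fraction.
    water_out_l_hr = la_l_hr * (1.0 - LA_FRACTION_TARGET) / LA_FRACTION_TARGET
    water_removed_kg_hr = (water_in_l_hr - water_out_l_hr) * WATER_DENSITY_KG_PER_L
    return water_removed_kg_hr / flux_kg_m2_hr

if __name__ == "__main__":
    for temp_c, flux in [(18, 0.2), (60, 1.58)]:
        print(f"{temp_c} C, {flux} kg/m2/hr -> ~{required_area(flux):.0f} m2")
```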

Relevance: 20.00%

Abstract:

There is now considerable evidence to suggest that non-demented people with Parkinson's disease (PD) experience difficulties using the morphosyntactic aspects of language. It remains unclear, however, at precisely which point in the processing of morphosyntax these difficulties emerge. The major objective of the present study was to examine the impact of PD on the processes involved in accessing morphosyntactic information in the lexicon. Nineteen people with PD and 19 matched control subjects participated in the study, which employed on-line word recognition tasks to examine morphosyntactic priming for local grammatical dependencies that occur both within (e.g. is going) and across (e.g. she gives) phrasal boundaries (Experiments 1 and 2, respectively). The control group evidenced robust morphosyntactic priming effects that were consistent with the involvement of both pre-lexical (Experiment 1) and post-lexical (Experiment 2) processing routines. Whilst the participants with PD also recorded priming for dependencies within phrasal boundaries (Experiment 1), priming effects were observed over an abnormally brief time course. Further, in contrast to the controls, the PD group failed to record morphosyntactic priming for constructions that crossed phrasal boundaries (Experiment 2). The results demonstrate that attentionally mediated mechanisms operating at both the pre- and post-lexical stages of processing are able to contribute to morphosyntactic priming effects. In addition, the findings support the notion that, whilst people with PD are able to access morphosyntactic information in a normal manner, the time frame in which this information remains available for processing is altered. Deficits may also be experienced at the post-lexical integrational stage of processing.

Relevance: 20.00%

Abstract:

This article describes a method to turn astronomical imaging into a random number generator by using the positions of incident cosmic rays and hot pixels to generate bit streams. We subject the resultant bit streams to a battery of standard benchmark statistical tests for randomness and show that these bit streams are statistically the same as a perfect random bit stream. Strategies for improving and building upon this method are outlined.
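
A minimal sketch of the general idea, assuming a 2-D image array and a simple intensity threshold to locate bright events; the detection scheme and the bit-extraction rule below (coordinate parity) are illustrative choices, not the authors' method.

```python
import numpy as np

def bits_from_image(image: np.ndarray, threshold: float) -> np.ndarray:
    """Derive a bit stream from the positions of bright events in an image.

    Pixels above `threshold` stand in for cosmic-ray hits or hot pixels, and
    the parity of each event's row and column index contributes one bit each
    to the output stream.
    """
    rows, cols = np.nonzero(image > threshold)
    bits = np.empty(2 * len(rows), dtype=np.uint8)
    bits[0::2] = rows & 1   # parity of the row coordinate
    bits[1::2] = cols & 1   # parity of the column coordinate
    return bits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.poisson(5, size=(512, 512)).astype(float)  # stand-in CCD frame
    stream = bits_from_image(frame, threshold=12)
    print(len(stream), "bits; fraction of ones:", stream.mean())
```

A real pipeline would feed such streams into a randomness test battery, as the abstract describes.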

Relevance: 20.00%

Abstract:

The Coefficient of Variance (CV; the standard deviation of response time divided by the mean response time, RT) is a measure of response time variability that corrects for differences in mean RT (Segalowitz & Segalowitz, 1993). A positive correlation between decreasing mean RTs and decreasing CVs (rCV-RT) has been proposed as an indicator of L2 automaticity and, more generally, as an index of processing efficiency. The current study evaluates this claim by examining lexical decision performance by individuals from three levels of English proficiency (Intermediate ESL, Advanced ESL and L1 controls) on stimuli from four levels of item familiarity, as defined by frequency of occurrence. A three-phase model of skill development defined by changing rCV-RT values was tested. Results showed that RTs and CVs systematically decreased as a function of increasing proficiency and frequency levels, with the rCV-RT serving as a stable indicator of individual differences in lexical decision performance. The rCV-RT and automaticity/restructuring account is discussed in light of the findings. The CV is also evaluated as a more general quantitative index of processing efficiency in the L2.
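
The two quantities at the centre of the abstract can be made concrete with a short sketch; it assumes per-participant arrays of response times, and the function names are illustrative only.

```python
import numpy as np

def cv_and_mean_rt(rts: np.ndarray) -> tuple[float, float]:
    """Return (mean RT, CV) for one participant.

    CV = standard deviation of RT / mean RT, which normalises RT variability
    for overall speed (Segalowitz & Segalowitz, 1993).
    """
    mean_rt = rts.mean()
    return mean_rt, rts.std(ddof=1) / mean_rt

def r_cv_rt(rt_samples: list[np.ndarray]) -> float:
    """Correlation between participants' mean RTs and their CVs (rCV-RT)."""
    means, cvs = zip(*(cv_and_mean_rt(r) for r in rt_samples))
    return float(np.corrcoef(means, cvs)[0, 1])
```

A positive rCV-RT then indicates that faster responders are also proportionally less variable, the pattern the abstract treats as a marker of automaticity.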

Relevance: 20.00%

Abstract:

This Toolkit was developed for the Australian dairy processing industry on behalf of Dairy Australia. At the conclusion of the project, industry participants gained exclusive access to a comprehensive Eco-Efficiency Manual, which outlined many of the opportunities available to the industry. Summary fact sheets were also prepared as publicly available resources.

Relevance: 20.00%

Abstract:

This manual has been developed to help the Australian dairy processing industry increase its competitiveness through increased awareness and uptake of eco-efficiency. The manual seeks to consolidate and build on existing knowledge, accumulated through projects and initiatives that the industry has previously undertaken to improve its use of raw materials and resources and reduce the generation of wastes. Where there is an existing comprehensive report or publication, the manual refers to this for further information. Eco-efficiency is about improving environmental performance to become more efficient and profitable. It is about producing more with less. It involves applying strategies that will not only ensure efficient use of resources and reduction in waste, but will also reduce costs. This chapter outlines the environmental challenges faced by Australian dairy processors. The manual explores opportunities for reducing environmental impacts in relation to water, energy, product yield, solid and liquid waste reduction and chemical use.

Relevance: 20.00%

Abstract:

The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data can be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases with the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcoming this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
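
The sketch below illustrates the general shape of such a framework, assuming a uniform grid for partitioning, multi-assignment of objects whose bounding rectangles span cells, and an MBR-intersection filter step; the paper's actual partitioning and join algorithms are not reproduced here.

```python
from itertools import product

# An object is (id, MBR) with MBR = (xmin, ymin, xmax, ymax).

def cells_for(mbr, cell_size):
    """Grid cells an MBR overlaps; objects spanning cells are multi-assigned."""
    xmin, ymin, xmax, ymax = mbr
    xs = range(int(xmin // cell_size), int(xmax // cell_size) + 1)
    ys = range(int(ymin // cell_size), int(ymax // cell_size) + 1)
    return product(xs, ys)

def partition(objects, cell_size):
    """Assign each object to every grid cell its MBR overlaps."""
    grid = {}
    for oid, mbr in objects:
        for cell in cells_for(mbr, cell_size):
            grid.setdefault(cell, []).append((oid, mbr))
    return grid

def mbrs_intersect(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def filter_join(grid_r, grid_s):
    """Filter step: candidate id pairs per cell. The set removes the duplicate
    pairs produced by multi-assignment; a refine step would then apply the
    exact geometric predicate to these candidates."""
    candidates = set()
    for cell, r_objs in grid_r.items():
        for rid, rmbr in r_objs:
            for sid, smbr in grid_s.get(cell, []):
                if mbrs_intersect(rmbr, smbr):
                    candidates.add((rid, sid))
    return candidates
```

Because each cell's candidate generation is independent, cells can be dispatched to parallel workers, and grouping neighbouring cells on the same worker is one way to preserve the spatial locality the abstract identifies as critical.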

Relevance: 20.00%

Abstract:

Efficiency of presentation of a peptide epitope by an MHC class I molecule depends on two parameters: its binding to the MHC molecule and its generation by intracellular antigen (Ag) processing. In contrast to the former parameter, the mechanisms underlying peptide selection in Ag processing are poorly understood. Peptide translocation by the TAP transporter is required for presentation of most epitopes and may modulate peptide supply to MHC class I molecules. To study the role of human TAP in peptide presentation by individual HLA class I molecules, we generated artificial neural networks capable of predicting the affinity of TAP for random-sequence 9-mer peptides. Using neural network-based predictions of TAP affinity, we found that peptides eluted from three different HLA class I molecules had higher TAP affinities than control peptides with equal binding affinities for the same HLA class I molecules, suggesting that human TAP may contribute to epitope selection. In simulated TAP binding experiments with 408 HLA class I binding peptides, HLA class I molecules differed significantly with respect to the TAP affinities of their ligands. As a result, some class I molecules, especially HLA-B27, may be particularly efficient in presentation of cytosolic peptides present at low concentrations, while most class I molecules may predominantly present abundant cytosolic peptides.
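
A minimal sketch of the kind of predictor described, assuming one-hot encoded 9-mers and a small feed-forward network; the encoding, architecture and training data here are placeholders rather than the authors' network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode_9mer(peptide: str) -> np.ndarray:
    """One-hot encode a 9-mer peptide: 9 positions x 20 amino acids."""
    x = np.zeros((9, 20))
    for pos, aa in enumerate(peptide):
        x[pos, AA_INDEX[aa]] = 1.0
    return x.ravel()

def train_tap_model(peptides, affinities) -> MLPRegressor:
    """Fit a small feed-forward network mapping 9-mers to measured TAP affinity."""
    X = np.array([encode_9mer(p) for p in peptides])
    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    model.fit(X, np.array(affinities))
    return model

# Usage with made-up peptides and affinity values:
# model = train_tap_model(["ALDKFNQRY", "SLYNTVATL"], [0.8, 0.3])
# predicted = model.predict([encode_9mer("RRYQKSTEL")])
```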

Relevance: 20.00%

Abstract:

Gauging data are available from numerous streams throughout Australia, and these data provide a basis for historical analysis of geomorphic change in stream channels in response to both natural phenomena and human activities. We present a simple method for analysis of these data, and a brief case study of an application to channel change in the Tully River, in the humid tropics of north Queensland. The analysis suggests that this channel has narrowed and deepened, rather than aggraded: channel aggradation was expected, given the intensification of land use in the catchment upstream of the gauging station. Limitations of the method relate to the time periods over which stream gauging occurred; the spatial patterns of stream gauging sites; the quality and consistency of data collection; and the availability of concurrent land-use histories on which to base the interpretation of the channel changes.

Relevance: 20.00%

Abstract:

Inhibitors of proteolytic enzymes (proteases) are emerging as prospective treatments for diseases such as AIDS and viral infections, cancers, inflammatory disorders, and Alzheimer's disease. Generic approaches to the design of protease inhibitors are limited by the unpredictability of interactions between, and structural changes to, inhibitor and protease during binding. A computer analysis of superimposed crystal structures for 266 small molecule inhibitors bound to 48 proteases (16 aspartic, 17 serine, 8 cysteine, and 7 metallo) provides the first conclusive proof that inhibitors, including substrate analogues, commonly bind in an extended beta-strand conformation at the active sites of all these proteases. Representative superimposed structures are shown for (a) multiple inhibitors bound to a protease of each class, (b) single inhibitors each bound to multiple proteases, and (c) conformationally constrained inhibitors bound to proteases. Thus inhibitor/substrate conformation, rather than sequence/composition alone, influences protease recognition, and this has profound implications for inhibitor design. This conclusion is supported by NMR, CD, and binding studies for HIV-1 protease inhibitors/substrates which, when preorganized in an extended conformation, have significantly higher protease affinity. Recognition is dependent upon conformational equilibria since helical and turn peptide conformations are not processed by proteases. Conformational selection explains the resistance of folded/structured regions of proteins to proteolytic degradation, the susceptibility of denatured proteins to processing, and the higher affinity of conformationally constrained 'extended' inhibitors/substrates for proteases. Other approaches to extended inhibitor conformations should similarly lead to high-affinity binding to a protease.

Relevance: 20.00%

Abstract:

Recent research has begun to provide support for the assumptions that memories are stored as a composite and are accessed in parallel (Tehan & Humphreys, 1998). New predictions derived from these assumptions and from the Chappell and Humphreys (1994) implementation of these assumptions were tested. In three experiments, subjects studied relatively short lists of words. Some of the lists contained two similar targets (thief and theft) or two dissimilar targets (thief and steal) associated with the same cue (ROBBERY). As predicted, target similarity affected performance in cued recall but not in free association. Contrary to predictions, two spaced presentations of a target did not improve performance in free association. Two additional experiments confirmed and extended this finding. Several alternative explanations for the target similarity effect, which incorporate assumptions about separate representations and sequential search, are rejected. The importance of the finding that, in at least one implicit memory paradigm, repetition does not improve performance is also discussed.