876 results for bigdata, data stream processing, dsp, apache storm, cyber security
Abstract:
The period, known to UK farmers and processors as the "spring flush", when the cows' diet changes from dry feed to spring pasture, has long been established as a time of change in milk properties and processing characteristics. Although it is believed to be a time when problems in processing are most likely to occur (e.g. milk that does not form clots or forms weak gels during cheesemaking), there is little evidence in the literature of detailed changes in milk composition and their impact on product manufacture. In this study, a range of physicochemical properties were analysed in milk collected from five commercial dairy herds before, during and after the spring flush period of 2006. In particular, total and ionic calcium contents of milk were studied in relation to other parameters including rennet clotting, acid gel properties, heat coagulation, alcohol stability, micelle size and zeta potential. Total divalent cations were significantly reduced from 35.4 to 33.4 mmol/L during the study, while ionic calcium was reduced from 1.48 to 1.40 mmol/L over the same period. Many parameters varied significantly between the sample dates. However, there was no evidence to suggest that any of the milk samples would have been unsuitable for processing - e.g. there were no samples that did not form clots with chymosin within a reasonable time or formed especially weak rennet or acid gels. A number of statistically significant correlations were found within the data, including ionic calcium concentration and pH; rennet clotting time (RCT) and micelle diameter; and RCT and ethanol stability. Overall, while there were clear variations in milk composition and properties over this period, there was no evidence to support the view that serious processing problems are likely during the change from dry feed to spring pasture.
Abstract:
The temperature-time profiles of 22 Australian industrial ultra-high-temperature (UHT) plants and 3 pilot plants, using both indirect and direct heating, were surveyed. From these data, the operating parameters of each plant, the chemical index C*, the bacteriological index B* and the predicted changes in the levels of beta-lactoglobulin, alpha-lactalbumin, lactulose, furosine and browning were determined using a simulation program based on published formulae and reaction kinetics data. There was a wide spread of heating conditions used, some of which resulted in a large margin of bacteriological safety and high chemical indices. However, no conditions were severe enough to cause browning during processing. The data showed a clear distinction between the indirect and direct heating plants. They also indicated that the degree of denaturation of alpha-lactalbumin varied over a wide range and may be a useful discriminatory index of heat treatment. Application of the program to pilot plants illustrated its value in determining processing conditions in these plants to simulate the conditions in industrial UHT plants.
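As a rough illustration of how such indices are obtained from a temperature-time profile, the sketch below computes B* and C* using the commonly cited Kessler-type formulae; the example profile, and the assumption that the survey's simulation program uses exactly these expressions, are illustrative only.

```python
import numpy as np

def uht_indices(time_s, temp_c):
    """Kessler-type UHT indices from a temperature-time profile (assumed formulae).

    B* integrates 10**((T - 135) / 10.5) over time and divides by 10.1 s;
    C* integrates 10**((T - 135) / 31.4) over time and divides by 30.5 s.
    B* >= 1 is usually read as an adequate bactericidal effect and
    C* <= 1 as an acceptable level of chemical change.
    """
    time_s = np.asarray(time_s, dtype=float)
    temp_c = np.asarray(temp_c, dtype=float)
    b_star = np.trapz(10.0 ** ((temp_c - 135.0) / 10.5), time_s) / 10.1
    c_star = np.trapz(10.0 ** ((temp_c - 135.0) / 31.4), time_s) / 30.5
    return b_star, c_star

# Purely illustrative indirect-heating profile: heat-up, short hold at 140 C, cool-down.
t = [0.0, 20.0, 40.0, 44.0, 60.0, 80.0]       # seconds
T = [80.0, 120.0, 140.0, 140.0, 100.0, 25.0]  # degrees Celsius
print(uht_indices(t, T))
```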
Abstract:
Consumers increasingly demand convenience foods of the highest quality in terms of natural flavor and taste, and which are free from additives and preservatives. This demand has triggered the need for the development of a number of nonthermal approaches to food processing, of which high-pressure technology has proven to be very valuable. A number of recent publications have demonstrated novel and diverse uses of this technology. Its novel features, which include destruction of microorganisms at room temperature or lower, have made the technology commercially attractive. Enzymes and spore-forming bacteria can be inactivated by the application of pressure-thermal combinations. This review aims to identify the opportunities and challenges associated with this technology. In addition to discussing the effects of high pressure on food components, this review covers the combined effects of high pressure processing with: gamma irradiation, alternating current, ultrasound, and carbon dioxide or anti-microbial treatment. Further, the applications of this technology in various sectors (fruits and vegetables, dairy and meat processing) have been dealt with extensively. The integration of high pressure with other mature processing operations such as blanching, dehydration, osmotic dehydration, rehydration, frying, freezing/thawing and solid-liquid extraction has been shown to open up new processing options. The key challenges identified include: heat transfer problems and the resulting non-uniformity in processing; obtaining reliable and reproducible data for process validation; lack of detailed knowledge about the interaction between high pressure and a number of food constituents; and packaging and statutory issues.
Abstract:
Acrylamide forms from free asparagine and reducing sugars during cooking, with asparagine concentration being the key parameter determining its formation in foods produced from wheat flour. In this study free amino acid concentrations were measured in the grain of varieties Spark and Rialto and four doubled haploid lines from a Spark x Rialto mapping population. The parental and doubled haploid lines had differing levels of total free amino acids and free asparagine in the grain, with one line consistently being lower than either parent for both of these factors. Sulfur deprivation led to huge increases in the concentrations of free asparagine and glutamine, and canonical variate analysis showed clear separation of the grain samples as a result of treatment (environment, E) and genotype (G) and provided evidence of G x E interactions. Low grain sulfur and high free asparagine concentration were closely associated with increased risk of acrylamide formation. G, E, and G x E effects were also evident in grain from six varieties of wheat grown at field locations around the United Kingdom in 2006 and 2007. The data indicate that progress in reducing the risk of acrylamide formation in processed wheat products could be made immediately through the selection and cultivation of low grain asparagine varieties and that further genetically driven improvements should be achievable. However, genotypes that are selected should also be tested under a range of environmental conditions.
Abstract:
Background: The computational grammatical complexity (CGC) hypothesis claims that children with G(rammatical)-specific language impairment (SLI) have a domain-specific deficit in the computational system affecting syntactic dependencies involving 'movement'. One type of such syntactic dependencies is filler-gap dependencies. In contrast, the Generalized Slowing Hypothesis claims that SLI children have a domain-general deficit affecting processing speed and capacity. Aims: To test contrasting accounts of SLI we investigate processing of syntactic (filler-gap) dependencies in wh-questions. Methods & Procedures: Fourteen G-SLI children aged 10;2-17;2, 14 age-matched and 17 vocabulary-matched controls were studied using the cross-modal picture-priming paradigm. Outcomes & Results: G-SLI children's processing speed was significantly slower than that of the age controls, but not of the younger vocabulary controls. The G-SLI children and vocabulary controls did not differ on memory span. However, the typically developing and G-SLI children showed a qualitatively different processing pattern. The age and vocabulary controls showed priming at the gap, indicating that they process wh-questions through syntactic filler-gap dependencies. In contrast, G-SLI children showed priming only at the verb. Conclusions: The findings indicate that G-SLI children fail to establish reliably a syntactic filler-gap dependency and instead interpret wh-questions via lexical thematic information. These data challenge the Generalized Slowing Hypothesis account, but support the CGC hypothesis, according to which G-SLI children have a particular deficit in the computational system affecting syntactic dependencies involving 'movement'. As effective remediation often depends on aetiological insight, the discovery of the nature of the syntactic deficit, alongside a possible compensatory use of semantics to facilitate sentence processing, can be used to direct therapy. However, the therapeutic strategy to be used, and whether similar strengths and weaknesses within the language system are found in other SLI subgroups, are empirical issues that warrant further research.
Abstract:
Random number generation (RNG) is a functionally complex process that is highly controlled and therefore dependent on Baddeley's central executive. This study addresses this issue by investigating whether key predictions from this framework are compatible with empirical data. In Experiment 1, the effect of increasing task demands by increasing the rate of the paced generation was comprehensively examined. As expected, faster rates affected performance negatively because central resources were increasingly depleted. Next, the effects of participants' exposure were manipulated in Experiment 2 by providing increasing amounts of practice on the task. There was no improvement over 10 practice trials, suggesting that the high level of strategic control required by the task was constant and not amenable to any automatization gain with repeated exposure. Together, the results demonstrate that RNG performance is a highly controlled and demanding process sensitive to additional demands on central resources (Experiment 1) and is unaffected by repeated performance or practice (Experiment 2). These features render the easily administered RNG task an ideal and robust index of executive function that is highly suitable for repeated clinical use.
Abstract:
The assumption that ignoring irrelevant sound in a serial recall situation is identical to ignoring a non-target channel in dichotic listening is challenged. Dichotic listening is open to moderating effects of working memory capacity (Conway et al., 2001) whereas irrelevant sound effects (ISE) are not (Beaman, 2004). A right ear processing bias is apparent in dichotic listening, whereas the bias is to the left ear in the ISE (Hadlington et al., 2004). Positron emission tomography (PET) imaging data (Scott et al., 2004, submitted) show bilateral activation of the superior temporal gyrus (STG) in the presence of intelligible, but ignored, background speech and right hemisphere activation of the STG in the presence of unintelligible background speech. It is suggested that the right STG may be involved in the ISE and a particularly strong left ear effect might occur because of the contralateral connections in audition. It is further suggested that left STG activity is associated with dichotic listening effects and may be influenced by working memory span capacity. The relationship of this functional and neuroanatomical model to known neural correlates of working memory is considered.
OFDM joint data detection and phase noise cancellation based on minimum mean square prediction error
Abstract:
This paper proposes a new iterative algorithm for orthogonal frequency division multiplexing (OFDM) joint data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the relatively less studied problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Specifically, we apply a hard-decision procedure at every iterative step to overcome the overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the PHN, and finally a more robust and compact fast process based on Givens rotation is proposed to reduce the complexity to a practical level. Numerical simulations are also given to verify the proposed algorithm.
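The abstract does not state which Padé form the authors use; as an assumed illustration of why a Padé representation of the small-angle phase-noise exponential is attractive, the snippet below compares a first-order Taylor expansion of e^(j*theta) with the (1,1) Padé approximant (1 + j*theta/2)/(1 - j*theta/2).

```python
import numpy as np

# Small phase-noise angles (radians); OFDM PHN is typically a small perturbation.
theta = np.linspace(-0.3, 0.3, 601)

exact  = np.exp(1j * theta)
taylor = 1 + 1j * theta                                # first-order Taylor expansion
pade   = (1 + 1j * theta / 2) / (1 - 1j * theta / 2)   # (1,1) Pade approximant, unit modulus

print("max |error|, Taylor:", np.abs(exact - taylor).max())
print("max |error|, Pade:  ", np.abs(exact - pade).max())
```

Besides the smaller error, the (1,1) Padé form keeps unit modulus, so it distorts only the phase, which is a natural property for a phase-noise model.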
Abstract:
This correspondence proposes a new algorithm for the OFDM joint data detection and phase noise (PHN) cancellation for constant modulus modulations. We highlight that it is important to address the overfitting problem since this is a major detrimental factor impairing the joint detection process. In order to attack the overfitting problem we propose an iterative approach based on minimum mean square prediction error (MMSPE) subject to the constraint that the estimated data symbols have constant power. The proposed constrained MMSPE algorithm (C-MMSPE) significantly improves the performance of existing approaches with little extra complexity being imposed. Simulation results are also given to verify the proposed algorithm.
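As an assumed, minimal sketch of the constant-power constraint described above (not the paper's actual C-MMSPE update), the snippet below projects soft symbol estimates back onto the constant-modulus circle; this is the kind of hard constraint that keeps the iteration away from a trivial overfitted solution.

```python
import numpy as np

def project_constant_modulus(symbols, radius=1.0):
    """Project soft symbol estimates onto the constant-modulus circle |s| = radius.

    Illustrative only: the phase of each estimate is kept and its magnitude
    is forced to the nominal constellation radius.
    """
    symbols = np.asarray(symbols, dtype=complex)
    return radius * np.exp(1j * np.angle(symbols))

# Example: noisy QPSK estimates pulled back onto the unit circle.
rng = np.random.default_rng(0)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
noisy = qpsk + 0.2 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(np.abs(project_constant_modulus(noisy)))   # all exactly 1.0
```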
Abstract:
The general packet radio service (GPRS) has been developed to allow packet data to be transported efficiently over an existing circuit-switched radio network, such as GSM. The main applications of GPRS are in transporting Internet protocol (IP) datagrams from web servers (for telemetry or for mobile Internet browsers). Four GPRS baseband coding schemes are defined to offer a trade-off in requested data rates versus propagation channel conditions. However, data rates of the order of >100 kbit/s are only achievable if the simplest coding scheme (CS-4) is used, which offers little error detection and correction (EDC) and therefore requires an excellent SNR, and if the receiver hardware is capable of full duplex, which is not currently available in the consumer market. A simple EDC scheme to improve the GPRS block error rate (BLER) performance is presented, particularly for CS-4, although gains are also seen in the other coding schemes. For every GPRS radio block that is corrected by the EDC scheme, the block does not need to be retransmitted, releasing bandwidth in the channel and improving the user's application data rate. As GPRS requires intensive processing in the baseband, a viable field programmable gate array (FPGA) solution is presented in this paper.
Abstract:
The General Packet Radio Service (GPRS) was developed to allow packet data to be transported efficiently over an existing circuit-switched radio network. The main applications for GPRS are in transporting IP datagrams from the user's mobile Internet browser to and from the Internet, or in telemetry equipment. A simple Error Detection and Correction (EDC) scheme to improve the GPRS Block Error Rate (BLER) performance is presented, particularly for coding scheme 4 (CS-4), although gains are also seen in the other coding schemes. For every GPRS radio block that is corrected by the EDC scheme, the block does not need to be retransmitted, releasing bandwidth in the channel and improving throughput and the user's application data rate. As GPRS requires intensive processing in the baseband, a viable hardware solution for a GPRS BLER co-processor, implemented in a Field Programmable Gate Array (FPGA), is discussed and presented in this paper.
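The bandwidth argument in these two abstracts can be made concrete with a back-of-envelope model; the 21.4 kbit/s per-timeslot CS-4 rate, the BLER value and the corrected fraction below are illustrative assumptions, not results from the papers.

```python
def gprs_goodput_kbps(raw_rate_kbps, bler, corrected_fraction=0.0):
    """Back-of-envelope GPRS goodput with block retransmission (illustrative model).

    If errored blocks are retransmitted until received correctly, each block
    needs on average 1 / (1 - residual_bler) transmissions, so the useful rate
    is raw_rate * (1 - residual_bler).  An EDC stage that repairs a fraction of
    the errored blocks lowers the residual BLER accordingly.
    """
    residual_bler = bler * (1.0 - corrected_fraction)
    return raw_rate_kbps * (1.0 - residual_bler)

# Assumed example: five CS-4 timeslots at 21.4 kbit/s each, 10% BLER,
# without and with an EDC co-processor that repairs half of the bad blocks.
raw = 5 * 21.4
print(gprs_goodput_kbps(raw, 0.10))        # ~96.3 kbit/s
print(gprs_goodput_kbps(raw, 0.10, 0.5))   # ~101.7 kbit/s
```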
Abstract:
The high variability of the intensity of suprathermal electron flux in the solar wind is usually ascribed to the high variability of sources on the Sun. Here we demonstrate that a substantial amount of the variability arises from peaks in stream interaction regions, where fast wind runs into slow wind and creates a pressure ridge at the interface. Superposed epoch analysis centered on stream interfaces in 26 interaction regions previously identified in Wind data reveals a twofold increase in 250 eV flux (integrated over pitch angle). Whether the peaks result from the compression there or are solar signatures of the coronal hole boundary, to which interfaces may map, is an open question. Suggestive of the latter, some cases show a displacement between the electron and magnetic field peaks at the interface. Since solar information is transmitted to 1 AU much more quickly by suprathermal electrons than by convected plasma signatures, the displacement may imply a shift in the coronal hole boundary through transport of open magnetic flux via interchange reconnection. If so, however, the fact that displacements occur in both directions and that the electron and field peaks in the superposed epoch analysis are nearly coincident indicates that any systematic transport expected from differential solar rotation is overwhelmed by a random pattern, possibly owing to transport across a ragged coronal hole boundary.
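For readers unfamiliar with the technique, the sketch below shows a generic superposed epoch analysis: align a time series on a set of key times, interpolate each segment onto a common epoch axis, and average. The data, window and event times are placeholders, not the Wind measurements used in the study.

```python
import numpy as np

def superposed_epoch(times, values, event_times, window, n_bins=97):
    """Generic superposed epoch analysis (illustrative, not the study's pipeline).

    Each segment of `values` within +/- `window` of an event (key) time is
    interpolated onto a common epoch axis and the segments are averaged.
    """
    epoch_axis = np.linspace(-window, window, n_bins)
    segments = [np.interp(epoch_axis + t0, times, values) for t0 in event_times]
    return epoch_axis, np.mean(segments, axis=0)

# Placeholder hourly "flux" series with three assumed interface times (hours).
rng = np.random.default_rng(0)
t = np.arange(0.0, 720.0, 1.0)
flux = 1.0 + 0.1 * rng.standard_normal(t.size)
axis, mean_profile = superposed_epoch(t, flux, event_times=[150, 350, 550], window=48)
```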
Abstract:
Hydrologic transport of dissolved organic carbon (DOC) from peat soils may differ from that in organo-mineral soils in how it responds to changes in flow, because of differences in soil profile and hydrology. In well-drained organo-mineral soils, low flow is through the lower mineral layer where DOC is absorbed and high flow is through the upper organic layer where DOC is produced. DOC concentrations in streams draining organo-mineral soils typically increase with flow. In saturated peat soils, both high and low flows are through an organic layer where DOC is produced. Therefore, DOC in stream water draining peat may not increase in response to changes in flow, as there is no switch in flow path between a mineral and an organic layer. To verify this, we conducted a high-resolution monitoring study of soil and stream water at an upland peat catchment in northern England. Our data showed a strong positive correlation between DOC concentrations at −1 and −5 cm depth and stream water, and weaker correlations between concentrations at −20 to −50 cm depth and stream water. Although near-surface organic material appears to be the key source of stream water DOC in both peat and organo-mineral soils, we observed a negative correlation between stream flow and DOC concentrations instead of a positive correlation, as DOC released from organic layers during low and high flow was diluted by rainfall. The differences in DOC transport processes between peat and organo-mineral soils have different implications for our understanding of long-term changes in DOC exports. While increased rainfall may cause an increase in DOC flux from peat due to an increase in water volume, it may cause a decrease in concentrations. This response is contrary to expected changes in DOC exports from organo-mineral soils, where increased rainfall is likely to result in an increase in both flux and concentration.
Abstract:
Most of the dissolved organic carbon (DOC) exported from catchments is transported during storm events. Accurate assessments of DOC fluxes are essential to understand long-term trends in the transport of DOC from terrestrial to aquatic systems, and also the loss of carbon from peatlands to determine changes in the source/sink status of peatland carbon stores. However, many long-term monitoring programmes collect water samples at intervals (e.g. weekly or monthly) longer than the duration of a typical storm event (typically <1–2 days). As widespread observations in catchments dominated by organo-mineral soils have shown that both concentration and flux of DOC increase during storm events, lower frequency monitoring could result in substantial underestimation of DOC flux as the most dynamic periods of transport are missed. However, our intensive monitoring study in a UK upland peatland catchment showed a contrasting response to these previous studies. Our results showed that (i) DOC concentrations decreased during autumn storm events and showed a poor relationship with flow during other seasons; and that (ii) this decrease in concentrations during autumn storms caused DOC flux estimates based on weekly monitoring data to be over-estimated, rather than under-estimated, because of over- rather than under-estimation of the flow-weighted mean concentration used in flux calculations. However, as DOC flux is ultimately controlled by discharge volume, and therefore rainfall, and the magnitude of change in discharge was greater than the magnitude of decline in concentrations, DOC flux increased during individual storm events. The implications for long-term DOC trends are therefore contradictory, as increased rainfall could increase flux but cause an overall decrease in DOC concentrations from peatland streams. Care needs to be taken when interpreting long-term trends in DOC flux rather than concentration; as flux is calculated from discharge estimates, and discharge is controlled by rainfall, DOC flux and rainfall/discharge will always be well correlated.
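As a minimal sketch of the flux arithmetic behind this argument (the variable names, units and example numbers are assumptions, not taken from the paper): the flux is the discharge-weighted integral of concentration, so a flow-weighted mean concentration that misses the storm-time dilution is biased high, and so is any flux estimate built from it.

```python
import numpy as np

def doc_flux_and_fwmc(discharge_m3s, conc_mgL, dt_s):
    """DOC flux and flow-weighted mean concentration (illustrative sketch).

    flux [g]    = sum(Q_i * C_i) * dt        (1 mg/L == 1 g/m3)
    C_fw [mg/L] = sum(Q_i * C_i) / sum(Q_i)
    """
    q = np.asarray(discharge_m3s, dtype=float)
    c = np.asarray(conc_mgL, dtype=float)
    return np.sum(q * c) * dt_s, np.sum(q * c) / np.sum(q)

# Hourly record for one assumed storm day: discharge rises tenfold, DOC is diluted.
q = np.array([0.5] * 12 + [5.0] * 12)     # m3/s
c = np.array([12.0] * 12 + [8.0] * 12)    # mg/L
true_flux, true_cfw = doc_flux_and_fwmc(q, c, 3600.0)
# A single pre-storm grab sample (12 mg/L) applied to the full discharge record
# overstates the flow-weighted mean concentration and hence the flux.
biased_flux = 12.0 * np.sum(q) * 3600.0
print(true_cfw, true_flux, biased_flux)
```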
Abstract:
Background: The information processing capacity of the human mind is limited, as is evidenced by the attentional blink (AB) - a deficit in identifying the second of two temporally close targets (T1 and T2) embedded in a rapid stream of distracters. Theories of the AB generally agree that it results from competition between stimuli for conscious representation. However, they disagree in the specific mechanisms, in particular about how attentional processing of T1 determines the AB to T2. Methodology/Principal Findings: The present study used the high spatial resolution of functional magnetic resonance imaging (fMRI) to examine the neural mechanisms underlying the AB. Our research approach was to design T1 and T2 stimuli that activate distinguishable brain areas involved in visual categorization and representation. ROI and functional connectivity analyses were then used to examine how attentional processing of T1, as indexed by activity in the T1 representation area, affected T2 processing. Our main finding was that attentional processing of T1 at the level of the visual cortex predicted T2 detection rates. Those individuals who activated the T1 encoding area more strongly in blink versus no-blink trials generally detected T2 on a lower percentage of trials. The coupling of activity between T1 and T2 representation areas did not vary as a function of conscious T2 perception. Conclusions/Significance: These data are consistent with the notion that the AB is related to attentional demands of T1 for selection, and indicate that these demands are reflected at the level of visual cortex. They also highlight the importance of individual differences in attentional settings in explaining AB task performance.