Abstract:
We analyse the security of iterated hash functions that compute an input dependent checksum which is processed as part of the hash computation. We show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimage attack of Kelsey and Schneier, the herding attack of Kelsey and Kohno and the multicollision attack of Joux. Our attacks also apply to a large class of cascaded hash functions. Our second preimage attacks on the cascaded hash functions improve the results of Joux presented at Crypto'04. We also apply our attacks to the MD2 and GOST hash functions. Our second preimage attacks on the MD2 and GOST hash functions improve the previous best known short-cut second preimage attacks on these hash functions by factors of at least 2^26 and 2^54, respectively. Our herding and multicollision attacks on the hash functions based on generic checksum functions (e.g., one-way) are a special case of the attacks on the cascaded iterated hash functions previously analysed by Dunkelman and Preneel and are not better than their attacks. On hash functions with easily invertible checksums, our multicollision and herding attacks (if the hash value is short, as in MD2) are more efficient than those of Dunkelman and Preneel.
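As an illustration of the multicollision structure exploited above, the following sketch builds a Joux-style 2^k-collision on a toy Merkle-Damgård hash. The 16-bit compression function, 4-byte block size and search bound are illustrative assumptions, not the constructions analysed in the paper.

```python
import hashlib
from itertools import product

STATE_BITS = 16  # toy chaining-state size so birthday collisions are cheap

def compress(state: int, block: bytes) -> int:
    """Toy compression function: truncated SHA-256 of state || block."""
    digest = hashlib.sha256(state.to_bytes(4, "big") + block).digest()
    return int.from_bytes(digest[:STATE_BITS // 8], "big")

def one_block_collision(state: int):
    """Birthday-search two distinct blocks that collide from `state`."""
    seen = {}
    for i in range(1 << (STATE_BITS // 2 + 4)):
        block = i.to_bytes(4, "big")
        out = compress(state, block)
        if out in seen:
            return seen[out], block, out
        seen[out] = block
    raise RuntimeError("no collision found within search bound")

def joux_multicollision(k: int, iv: int = 0):
    """Chain k one-block collisions; any choice per stage collides."""
    state, pairs = iv, []
    for _ in range(k):
        b0, b1, state = one_block_collision(state)
        pairs.append((b0, b1))
    return [b"".join(choice) for choice in product(*pairs)], state

def md_hash(msg: bytes, iv: int = 0) -> int:
    """Plain Merkle-Damgard iteration of the toy compression function."""
    state = iv
    for i in range(0, len(msg), 4):
        state = compress(state, msg[i:i + 4])
    return state

msgs, final = joux_multicollision(3)
assert len(msgs) == 8 and len({md_hash(m) for m in msgs}) == 1
```

Each of the k stages needs only birthday-bound work, yet the k chosen block pairs multiply into 2^k colliding messages; it is this multiplicative structure that the checksum and cascade attacks above build on.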
Abstract:
We propose a new way to build a combined list from K base lists, each containing N items. A combined list consists of top segments of various sizes from each base list so that the total size of all top segments equals N. A sequence of item requests is processed and the goal is to minimize the total number of misses. That is, we seek to build a combined list that contains all the frequently requested items. We first consider the special case of disjoint base lists. There, we design an efficient algorithm that computes the best combined list for a given sequence of requests. In addition, we develop a randomized online algorithm whose expected number of misses is close to that of the best combined list chosen in hindsight. We prove lower bounds that show that the expected number of misses of our randomized algorithm is close to the optimum. In the presence of duplicate items, we show that computing the best combined list is NP-hard. We show that our algorithms still apply to a linearized notion of loss in this case. We expect that this new way of aggregating lists will find many ranking applications.
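To make the objective concrete, the sketch below counts misses for a given allocation of top-segment sizes over disjoint base lists and finds the best allocation in hindsight by brute force. The paper's efficient offline algorithm and randomized online algorithm are not reproduced here; the lists and requests are illustrative.

```python
from itertools import product

def misses(base_lists, sizes, requests):
    """Count requests that fall outside the chosen top segments."""
    combined = set()
    for lst, size in zip(base_lists, sizes):
        combined.update(lst[:size])
    return sum(1 for r in requests if r not in combined)

def best_combined_list(base_lists, requests):
    """Brute-force the size allocation minimising misses (disjoint lists).

    Exponential in the number of lists K; the paper gives an efficient
    algorithm for this case, so this serves only as a specification."""
    n, k = len(base_lists[0]), len(base_lists)
    best = None
    for sizes in product(range(n + 1), repeat=k):
        if sum(sizes) != n:
            continue
        m = misses(base_lists, sizes, requests)
        if best is None or m < best[0]:
            best = (m, sizes)
    return best

lists = [["a", "b", "c"], ["x", "y", "z"]]  # two disjoint base lists, N = 3
reqs = ["a", "x", "y", "a", "y"]
m, sizes = best_combined_list(lists, reqs)  # best allocation in hindsight
```

Here the optimum keeps the top 1 item of the first list and the top 2 of the second, covering every request.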
Abstract:
During Pavlovian auditory fear conditioning a previously neutral auditory stimulus (CS) gains emotional significance through pairing with a noxious unconditioned stimulus (US). These associations are believed to be formed by way of plasticity at auditory input synapses on principal neurons in the lateral nucleus of the amygdala (LA). In order to begin to understand how fear memories are stored and processed by synaptic changes in the LA, we have quantified both the total neuron number and the sub-cellular structure of LA principal neurons. We first used stereological cell counting methods on Giemsa-stained or GABA-immunostained rat brain. We identified 60,322+/-1408 neurons in the LA unilaterally (n=7). Of these, 16,917+/-471 were GABA positive. The intercalated nuclei were excluded from the counts, and thus the GABA-positive cells are believed to represent GABAergic interneurons. The sub-nuclei of the LA were also counted independently. We then quantified the morphometric properties of in vitro electrophysiologically identified principal neurons of the LA, corrected for shrinkage in the x, y and z planes. The total dendritic length was 9.97+/-2.57 mm, with 21+/-4 nodes (n=6). Dendritic spine density was 0.19+/-0.03 spines/um (n=6). Intra-LA axon collaterals had a bouton density of 0.1+/-0.02 boutons/um (n=5). These data begin to reveal the finite cellular and sub-cellular processing capacity of the lateral amygdala, and should facilitate efforts to understand mechanisms of plasticity in the LA.
Abstract:
Meat/meat alternatives (M/MA) are key sources of Fe, Zn and protein, but intake tends to be low in young children. Australian recommendations state that Fe-rich foods, including M/MA, should be the first complementary foods offered to infants. The present paper reports M/MA consumption of Australian infants and toddlers, compares intake with guidelines, and suggests strategies to enhance adherence to those guidelines. Mother-infant dyads recruited as part of the NOURISH and South Australian Infants Dietary Intake studies provided 3 d of intake data at three time points: Time 1 (T1) (n 482, mean age 5·5 (SD 1·1) months), Time 2 (T2) (n 600, mean age 14·0 (SD 1·2) months) and Time 3 (T3) (n 533, mean age 24 (SD 0·7) months). Of 170 infants consuming solids and aged greater than 6 months at T1, 50 (29 %) consumed beef, lamb, veal (BLV) or pork on at least one of 3 d. Commercial infant foods containing BLV or poultry were the most common form of M/MA consumed at T1, whilst by T2 BLV mixed dishes (including pasta bolognaise) became more popular and remained so at T3. Processed M/MA increased in popularity over time, led by pork (including ham). The present study shows that M/MA are not being eaten by Australian infants or toddlers regularly enough, or in adequate quantities, to meet recommendations, and that the form in which these foods are eaten can lead to smaller M/MA serve sizes and greater Na intake. Parents should be encouraged to offer M/MA in a recognisable form, as one of the first complementary foods, in order to increase acceptance at a later age.
Abstract:
Background Procedural sedation and analgesia (PSA) is used to attenuate the pain and distress that may otherwise be experienced during diagnostic and interventional medical or dental procedures. As the risk of adverse events increases with the depth of sedation induced, frequent monitoring of level of consciousness is recommended. Level of consciousness is usually monitored during PSA with clinical observation. Processed electroencephalogram-based depth of anaesthesia (DoA) monitoring devices provide an alternative method to monitor level of consciousness that can be used in addition to clinical observation. However, there is uncertainty as to whether their routine use in PSA would be justified. Rigorous evaluation of the clinical benefits of DoA monitors during PSA, including comprehensive syntheses of the available evidence, is therefore required. One potential clinical benefit of using DoA monitoring during PSA is that the technology could improve patient safety by reducing sedation-related adverse events, such as death or permanent neurological disability. We hypothesise that earlier identification of lapses into deeper than intended levels of sedation using DoA monitoring leads to more effective titration of sedative and analgesic medications, and results in a reduction in the risk of adverse events caused by the consequences of over-sedation, such as hypoxaemia. The primary objective of this review is to determine whether using DoA monitoring during PSA in the hospital setting improves patient safety by reducing the risk of hypoxaemia (defined as an arterial partial pressure of oxygen below 60 mmHg or percentage of haemoglobin that is saturated with oxygen [SpO2] less than 90 %). Other potential clinical benefits of using DoA monitoring devices during sedation will be assessed as secondary outcomes. 
Methods/design Electronic databases will be systematically searched for randomized controlled trials comparing the use of depth of anaesthesia monitoring devices with clinical observation of level of consciousness during PSA. Language restrictions will not be imposed. Screening, study selection and data extraction will be performed by two independent reviewers. Disagreements will be resolved by discussion. Meta-analyses will be performed if suitable. Discussion This review will synthesise the evidence on an important potential clinical benefit of DoA monitoring during PSA within hospital settings.
Abstract:
The size and arrangement of stromal collagen fibrils (CFs) influence the optical properties of the cornea and hence its function. How the collagen is spatially arranged in relation to fibril diameter, however, remains an open question. In the present study, we introduce a new parameter, the edge-fibrillar distance (EFD), to measure how two collagen fibrils are spaced with respect to their closest edges, and we characterize their spatial distribution through the normalized standard deviation of EFD (NSDEFD); both were assessed following the application of two commercially available multipurpose solutions (MPS): ReNu and Hippia. The corneal buttons were soaked separately in ReNu and Hippia MPS for five hours, fixed overnight in 2.5% glutaraldehyde containing cuprolinic blue and processed for transmission electron microscopy. The electron micrographs were processed using a user-coded ImageJ plugin. Statistical analysis was performed to compare the image-processed equivalent diameter (ED), inter-fibrillar distance (IFD) and EFD of the CFs of treated versus normal corneas. The ReNu-soaked cornea showed a partly degenerated epithelium with loose hemidesmosomes and Bowman's collagen. In contrast, the epithelium of the cornea soaked in Hippia was degenerated or lost, but showed closely packed Bowman's collagen. Soaking the corneas in either MPS caused a statistically significant decrease in anterior collagen fibril ED and significant changes in IFD and EFD relative to untreated corneas (p < 0.05 for all comparisons). The EFD measurement directly conveys the gap between the peripheries of the collagen fibrils and their spatial distribution; in combination with ED, it shows how the corneal collagen fibrils are spaced in relation to their diameters. The spatial distribution parameter NSDEFD indicated that the fibrils of the ReNu-treated cornea were the most uniformly distributed spatially, followed by those of the normal and Hippia-treated corneas.
The EFD measurement, with its relatively low standard deviation, and NSDEFD, a characteristic of uniform CF distribution, can serve as additional parameters for evaluating collagen organization and assessing the effects of various treatments on corneal health and transparency.
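A minimal sketch of how EFD and NSDEFD could be computed from segmented fibril cross-sections, assuming circular fibrils and taking NSDEFD as the coefficient of variation (standard deviation over mean) of EFD; the paper's exact image-processing definitions may differ, and the fibril coordinates below are hypothetical.

```python
from math import dist, sqrt

def edge_fibrillar_distance(c1, r1, c2, r2):
    """EFD: gap between the closest edges of two circular fibril
    cross-sections, i.e. centre-to-centre distance minus the two radii."""
    return dist(c1, c2) - (r1 + r2)

def nsd(values):
    """Normalised standard deviation (std / mean) of a sample."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return sqrt(variance) / mean

# hypothetical fibril centres and radii, in nm
fibrils = [((0, 0), 15), ((60, 0), 15), ((0, 55), 14)]
efds = [
    edge_fibrillar_distance(c1, r1, c2, r2)
    for i, (c1, r1) in enumerate(fibrils)
    for (c2, r2) in fibrils[i + 1:]
]
nsdefd = nsd(efds)  # lower values indicate more uniform spacing
```

In practice the EFD would be computed only between nearest-neighbour fibrils identified in the micrograph, rather than over all pairs as in this toy example.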
Abstract:
Light-emitting field-effect transistors (LEFETs) are an emerging class of multifunctional optoelectronic devices. They combine the light-emitting function of an OLED with the switching function of a transistor in a single device architecture; this dual functionality gives LEFETs potential applications in active-matrix displays. However, the key problems of existing LEFETs thus far have been their low EQEs at high brightness, poor ON/OFF ratios and poorly defined light-emitting areas (a thin emissive zone at the edge of the electrodes). Here we report heterostructure LEFETs, based on a solution-processed unipolar charge-transport layer and an emissive polymer, that have an EQE of up to 1% at a brightness of 1350 cd/m^2, an ON/OFF ratio > 10^4 and a well-defined light-emitting zone suitable for display pixel design. We show that a non-planar hole-injecting electrode combined with a semi-transparent electron-injecting electrode enables high EQE at high brightness together with a high ON/OFF ratio. Furthermore, we demonstrate that heterostructure LEFETs have a better frequency response (f_cut-off = 2.6 kHz) than single-layer LEFETs. The results presented here are therefore a major step along the pathway towards the realization of LEFETs for display applications.
Abstract:
Bottom-emitting organic light-emitting diodes (OLEDs) can suffer from lower external quantum efficiencies (EQEs) due to inefficient out-coupling of the generated light. Herein, it is demonstrated that the current efficiency and EQE of red, yellow and blue fluorescent single-layer polymer OLEDs are significantly enhanced when a MoOx(5 nm)/Ag(10 nm)/MoOx(40 nm) stack is used as the transparent anode in a top-emitting OLED structure. A maximum current efficiency and EQE of 21.2 cd/A and 6.7%, respectively, were achieved for a yellow OLED, while a blue OLED achieved a maximum of 16.5 cd/A and 10.1%, respectively. The increase in light out-coupling from the top-emitting OLEDs raised efficiency by a factor of up to 2.2 relative to the optimised bottom-emitting devices, which is the best out-coupling reported using solution-processed polymers in a simple architecture and a significant step forward for their use in large-area lighting and displays.
Abstract:
We have prepared p-n junction organic photovoltaic cells using an all-solution processing method with poly(3-hexylthiophene) (P3HT) as the donor and phenyl-C61-butyric acid methyl ester (PCBM) as the acceptor. An interdigitated donor/acceptor interface morphology was observed in the device processed with the lowest-boiling-point solvent for PCBM used in this study. The influences of different solvents on donor/acceptor morphology and the respective device performance were investigated simultaneously. The best device obtained had a characteristically rough interface morphology with a peak-to-valley value of ∼15 nm. The device displayed a power conversion efficiency of 1.78%, an open-circuit voltage (V_oc) of 0.44 V, a short-circuit current density (J_sc) of 9.4 mA/cm^2 and a fill factor of 43%.
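The reported efficiency is consistent with the other figures of merit via the standard relation PCE = (V_oc × J_sc × FF) / P_in, assuming the usual AM1.5G input power density of 100 mW/cm^2:

```python
def power_conversion_efficiency(voc_v, jsc_ma_cm2, ff, pin_mw_cm2=100.0):
    """PCE in %: product of V_oc (V), J_sc (mA/cm^2) and fill factor,
    divided by the incident power density (mW/cm^2)."""
    return voc_v * jsc_ma_cm2 * ff / pin_mw_cm2 * 100.0

# figures of merit quoted in the abstract
pce = power_conversion_efficiency(0.44, 9.4, 0.43)  # ~1.78 %
```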
Abstract:
Selectively modulating charge-carrier transport in semiconducting materials is extremely challenging but highly desirable for the development of high-performance, low-power logic circuits. Systematic control over the polarity (electrons and holes) of transistors based on a solution-processed, layer-by-layer polymer/graphene oxide hybrid system has been demonstrated. The degree of polarity conversion is well controlled and reversible through trapping of the opposite carriers: in essence, an electron-only device is switched into a hole-only device, or vice versa. Finally, a hybrid-layer ambipolar inverter is demonstrated in which almost no leakage of opposite carriers is found. This hybrid material has a wide range of applications in planar p-n junctions and logic circuits for the high-throughput manufacturing of printed electronic circuits.
Abstract:
Contemporary food systems promote the consumption of highly processed foods of limited nutrition, contributing to overweight and obesity, diet-related disease and significant financial burden on healthcare systems. In part, this has resulted from highly successful design, development and marketing strategies for processed foods. The successful application of such strategies to healthy food options, and the services and business plans that accompany them, could assist in enhancing health and alleviating burden on health care systems. Product designers have long been aware of the importance of intertwining emotional experiences with new products. However, a lack of theoretical precision exists for applying emotional design beyond food products, to the food systems, services and business models that drive them. This article explores emotional design within the context of food and food systems and proposes a new concept – Emotional Food Design (EFD), through which emotional design is integrated across levels of a food system. EFD complements the dominating deductive view of food systems research with an abductive iterative design approach contextualized within the creation of new food products, services and business models and their associated emotional attachments. This paper concludes by outlining what EFD can offer to reorient food systems to successfully promote healthy eating.
Abstract:
This review is focused on the impact of chemometrics for resolving data sets collected from investigations of the interactions of small molecules with biopolymers. These samples have been analyzed with various instrumental techniques, such as fluorescence, ultraviolet–visible spectroscopy, and voltammetry. The impact of two powerful and demonstrably useful multivariate methods for the resolution of complex data—multivariate curve resolution–alternating least squares (MCR–ALS) and parallel factor analysis (PARAFAC)—is highlighted through analysis of applications involving the interactions of small molecules with the biopolymers serum albumin and deoxyribonucleic acid. The outcomes illustrate that significant information extracted by the chemometric methods was unattainable by simple, univariate data analysis. In addition, although the techniques used to collect data were confined to ultraviolet–visible spectroscopy, fluorescence spectroscopy, circular dichroism, and voltammetry, data profiles produced by other techniques may also be processed. Topics considered include binding sites and modes, cooperative and competitive small-molecule binding, the kinetics and thermodynamics of ligand binding, and the folding and unfolding of biopolymers. The applications of the MCR–ALS and PARAFAC methods reviewed were primarily published between 2008 and 2013.
Abstract:
Background Nicotiana benthamiana is an allo-tetraploid plant, which can be challenging for de novo transcriptome assemblies due to homeologous and duplicated gene copies. Transcripts generated from such genes can be distinct yet highly similar in sequence, with markedly differing expression levels. This can lead to unassembled, partially assembled or mis-assembled contigs. Due to the different properties of de novo assemblers, no one assembler with any one given parameter space can re-assemble all possible transcripts from a transcriptome. Results In an effort to maximise the diversity and completeness of de novo assembled transcripts, we utilised four de novo transcriptome assemblers, TransAbyss, Trinity, SOAPdenovo-Trans and Oases, using a range of k-mer sizes and different input RNA-seq read counts. We complemented the parameter space biologically by using RNA from 10 plant tissues. We then combined the output of all assemblies into a large super-set of sequences. Using a method from the EvidentialGene pipeline, the combined assembly was reduced from 9.9 million de novo assembled transcripts to about 235,000, of which about 50,000 were classified as primary. Metrics such as average bit-scores, feature response curves and the ability to distinguish paralogous or homeologous transcripts indicated that the EvidentialGene-processed assembly was of high quality. Of 35 RNA silencing gene transcripts, 34 were identified as assembled to full length, whereas in a previous assembly using only one assembler, 9 of these were only partially assembled. Conclusions To achieve a high-quality transcriptome, it is advantageous to implement and combine the output from as many different de novo assemblers as possible. We have, in essence, taken the 'best' output from each assembler while minimising sequence redundancy. We have also shown that simultaneous assessment of a variety of metrics, not just contig length, is necessary to gauge the quality of assemblies.
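The merge-and-reduce step can be pictured with the toy sketch below, which pools contigs from several assemblers and discards exact duplicates and perfect substrings. This is a crude stand-in for EvidentialGene's CDS-based classification, not the pipeline itself, and the contig strings are hypothetical.

```python
def reduce_transcripts(assemblies):
    """Pool contigs from several assemblers, then drop exact duplicates
    and contigs that are perfect substrings of a longer contig."""
    merged = sorted({t for asm in assemblies for t in asm},
                    key=len, reverse=True)
    kept = []
    for contig in merged:
        if not any(contig in longer for longer in kept):
            kept.append(contig)
    return kept

# hypothetical contigs from two assemblers
asm1 = ["ATGGCCTTTAA", "ATGCCC"]
asm2 = ["GCCTTT", "ATGCCC", "TTTAAACGT"]
nonredundant = reduce_transcripts([asm1, asm2])  # "GCCTTT" is absorbed
```

Real redundancy reduction must also tolerate sequencing errors and near-identical homeologs, which is why EvidentialGene classifies by coding potential rather than exact string containment.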
Abstract:
Background The growing awareness of transfusion-associated morbidity and mortality necessitates investigations into the underlying mechanisms. Small animals have been the dominant transfusion model but have associated limitations. This study aimed to develop a comprehensive large animal (ovine) model of transfusion encompassing blood collection, processing and storage, and compatibility testing, right through to post-transfusion outcomes. Materials and methods Two units of blood were collected from each of 12 adult male Merino sheep and processed into 24 ovine packed red blood cell (PRBC) units. Baseline haematological parameters of ovine blood and PRBC cells were analysed. Biochemical changes in ovine PRBCs were characterized during the 42-day storage period. Immunological compatibility of the blood was confirmed with sera from potential recipient sheep, using a saline and albumin agglutination cross-match. Following confirmation of compatibility, each recipient sheep (n = 12) was transfused with two units of ovine PRBC. Results Procedures for collecting, processing, cross-matching and transfusing ovine blood were established. Although ovine red blood cells are smaller and higher in number, their mean cell haemoglobin concentration is similar to that of human red blood cells. Ovine PRBC showed improved storage properties in saline–adenine–glucose–mannitol (SAG-M) compared with previous human PRBC studies. Seventy-six compatibility tests were performed and 17·1% were incompatible. Only cross-match-compatible ovine PRBC were transfused and no adverse reactions were observed. Conclusion These findings demonstrate the utility of the ovine model for future blood transfusion studies and highlight the importance of compatibility testing in animal models involving homologous transfusions.
Abstract:
Uncertainty assessments of herbicide losses from rice paddies in Japan associated with local meteorological conditions and water management practices were performed using a pesticide fate and transport model, PCPF-1, under a Monte Carlo (MC) simulation scheme. First, MC simulations were conducted for five different cities with a prescribed water management scenario and a 10-year meteorological dataset for each city. Water management was observed to be effective in reducing pesticide runoff; however, a greater potential for pesticide runoff remained in Western Japan. Secondly, an extended analysis was undertaken to evaluate the effects of local water management and meteorological conditions in the Chikugo River basin and the Sakura River basin, using uncertainty inputs derived from observed water management data. The results showed that, because of more severe rainfall events, significant pesticide runoff occurred in the Chikugo River basin even when appropriate irrigation practices were implemented. © Pesticide Science Society of Japan.
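A generic Monte Carlo sketch of the kind of exceedance analysis described, assuming an exponential daily-rainfall distribution and a toy overflow mass balance; this is not the PCPF-1 model, and all parameter values are hypothetical.

```python
import random

def runoff_fraction(rain_mm, holding_mm):
    """Toy mass balance: fraction of dissolved pesticide lost when
    rainfall overtops the paddy's holding depth."""
    excess = max(rain_mm - holding_mm, 0.0)
    return excess / (excess + holding_mm)

def mc_exceedance(n, holding_mm, loss_limit, mean_rain_mm=20.0, seed=0):
    """Monte Carlo estimate of P(loss fraction > loss_limit) under an
    assumed exponential daily-rainfall distribution."""
    rng = random.Random(seed)
    hits = sum(
        runoff_fraction(rng.expovariate(1.0 / mean_rain_mm), holding_mm)
        > loss_limit
        for _ in range(n)
    )
    return hits / n

# a deeper holding depth (stricter water management) lowers the risk,
# mirroring the qualitative effect of water management reported above
p_managed = mc_exceedance(10_000, holding_mm=60.0, loss_limit=0.2)
p_lax = mc_exceedance(10_000, holding_mm=20.0, loss_limit=0.2)
```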