11 results for cooking losses
in University of Queensland eSpace - Australia
Abstract:
Evidence indicates that cruciferous vegetables are protective against a range of cancers, with glucosinolates and their breakdown products considered the biologically active constituents. To date, epidemiological studies have not investigated the intakes of these constituents due to a lack of food composition databases. The aim of the present study was to develop a database for the glucosinolate content of cruciferous vegetables that can be used to quantify dietary exposure for use in epidemiological studies of diet-disease relationships. Published food composition data sources for the glucosinolate content of cruciferous vegetables were identified and assessed for data quality using established criteria. Adequate data for the total glucosinolate content were available from eighteen published studies providing 140 estimates for forty-two items. The highest glucosinolate values were for cress (389 mg/100 g) while the lowest values were for Pe-tsai Chinese cabbage (20 mg/100 g). There is considerable variation in the values reported for the same vegetable by different studies, with a median difference between the minimum and maximum values of 5.8-fold. Limited analysis of cooked cruciferous vegetables has been conducted; however, the available data show that average losses during cooking are approximately 36 %. This is the first attempt to collate the available literature on the glucosinolate content of cruciferous vegetables. These data will allow quantification of intakes of the glucosinolates, which can be used in epidemiological studies to investigate the role of cruciferous vegetables in cancer aetiology and prevention.
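The quantification the abstract describes amounts to multiplying a food-composition value by the portion consumed, discounted for cooking losses. A minimal sketch follows; the content values (cress 389 mg/100 g, Pe-tsai cabbage 20 mg/100 g) and the ~36 % average cooking loss come from the abstract, while the portion sizes and cooked/raw flags are illustrative assumptions:

```python
# Sketch: estimating glucosinolate intake from food-composition values.
# Content values (mg per 100 g raw) are those quoted in the abstract;
# portion sizes and cooked/raw status are illustrative assumptions.

GLUCOSINOLATE_MG_PER_100G = {
    "cress": 389.0,            # highest value reported
    "pe-tsai cabbage": 20.0,   # lowest value reported
}

COOKING_RETENTION = 1.0 - 0.36  # average losses during cooking ~36 %


def intake_mg(food: str, grams: float, cooked: bool) -> float:
    """Estimated glucosinolate intake (mg) for one eating occasion."""
    per_gram = GLUCOSINOLATE_MG_PER_100G[food] / 100.0
    retention = COOKING_RETENTION if cooked else 1.0
    return grams * per_gram * retention


raw_cress = intake_mg("cress", 30, cooked=False)             # 30 g raw
cooked_cabbage = intake_mg("pe-tsai cabbage", 150, cooked=True)
print(f"{raw_cress:.1f} mg, {cooked_cabbage:.1f} mg")
```

Summing such per-occasion estimates over a food-frequency questionnaire would give the dietary exposure the study aims to support.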
Abstract:
The thermal properties of soft and hard wheat grains, cooked in a steam pressure cooker, as a function of cooking temperature and time were investigated by modulated temperature differential scanning calorimetry (MTDSC). Four cooking temperatures (110, 120, 130 and 140 degrees C) and six cooking times (20, 40, 60, 80, 100 and 120 min) for each temperature were studied. It was found that typical non-reversible heat flow thermograms of cooked and uncooked wheat grains consisted of two endothermic baseline shifts localised around 40-50 degrees C and then 60-70 degrees C. The second peaks of non-reversible heat flow thermograms (60-70 degrees C) were associated with starch gelatinisation. The degree of gelatinisation was quantified based on these peaks. In this study, starch was completely gelatinised within 60-80 min for cooking temperatures at 110-120 degrees C and within 20 min for cooking temperatures at 130-140 degrees C. MTDSC detected reversible endothermic baseline shifts in most samples, localised broadly around 48-67 degrees C with changes in heat capacity ranging from 0.02 to 0.06 J/g per degrees C. These reversible endothermic baseline shifts are related to the glass transition, which occurs during starch gelatinisation. Data on the specific heat capacity of the cooked wheat samples are provided. (C) 2005 Elsevier Ltd. All rights reserved.
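The abstract quantifies the degree of gelatinisation from the residual 60-70 °C endotherm. A common formulation of that calculation, shown below as an assumption rather than the paper's exact equation, compares the residual gelatinisation enthalpy of a cooked sample with that of the raw grain:

```python
def degree_of_gelatinisation(dH_cooked: float, dH_raw: float) -> float:
    """Degree of starch gelatinisation (%) from residual gelatinisation
    enthalpies (J/g): fully gelatinised starch shows no residual endotherm."""
    return (1.0 - dH_cooked / dH_raw) * 100.0


# Illustrative enthalpy values (J/g), not figures from the study:
print(degree_of_gelatinisation(2.5, 10.0))  # partially cooked -> 75.0
print(degree_of_gelatinisation(0.0, 10.0))  # fully cooked -> 100.0
```

Under this formulation, "completely gelatinised within 60-80 min at 110-120 degrees C" corresponds to the residual endotherm vanishing at those cooking times.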
Abstract:
We develop a model for exponential decay of broadband pulses, and examine its implications for experiments on optical precursors. One of the signature features of Brillouin precursors is attenuation with a less rapid decay than that predicted by Beer's Law. Depending on the pulse parameters and the model that is adopted for the dielectric properties of the medium, the limiting z-dependence of the loss has been described as z(-1/2), z(-1/3), exponential, or, in more detailed descriptions, some combination of the above. Experimental results in the search for precursors are examined in light of the different models, and a stringent test for sub-exponential decay is applied to data on propagation of 500 femtosecond pulses through 1-5 meters of water. (C) 2005 Optical Society of America.
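The contrast between Beer's-law exponential attenuation and the sub-exponential z^(-1/2) and z^(-1/3) scalings can be made concrete numerically. In the sketch below, the attenuation coefficient and the unit normalisation of the power laws are illustrative assumptions, not values from the paper:

```python
import math


def beer_lambert(z: float, alpha: float) -> float:
    """Beer's-law relative intensity: I(z)/I0 = exp(-alpha * z)."""
    return math.exp(-alpha * z)


def power_law(z: float, p: float) -> float:
    """Algebraic (sub-exponential) decay ~ z**(-p), normalised at z = 1."""
    return z ** (-p)


# At sufficient depth, any power law exceeds an exponential, which is
# why sub-exponential decay is the signature sought in precursor data.
alpha = 2.0  # illustrative attenuation coefficient (1/m)
for z in (1.0, 3.0, 5.0):
    print(f"z={z} m: exp={beer_lambert(z, alpha):.2e}, "
          f"z^-1/2={power_law(z, 0.5):.3f}, z^-1/3={power_law(z, 1/3):.3f}")
```

This widening gap with depth is what makes a stringent test over 1-5 meters of water meaningful: exponential and algebraic models diverge most at the far end of the path.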
Abstract:
The objective of this study was to predict the number of cases of pressure ulcer, the bed days lost, and the economic value of these losses at Australian public hospitals. All adults (>= 18 years of age) with a minimum stay of 1 night and discharged from selected clinical units from all Australian public hospitals in 2001-02 were included in the study. The main outcome measures were the number of cases of pressure ulcer, bed days lost to pressure ulcer, and economic value of these losses. We predict a median of 95,695 cases of pressure ulcer with a median of 398,432 bed days lost, incurring median opportunity costs of AU$285 M. The number of cases, and hence the costs, was greatest in New South Wales and lowest in the Australian Capital Territory. We conclude that pressure ulcers represent a serious clinical and economic problem for a resource-constrained public hospital system. The most cost-effective, risk-reducing interventions should be pursued up to a point where the marginal benefit of prevention is equalized with marginal cost. By preventing pressure ulcers, public hospitals can improve efficiency and the quality of the patient's experience and health outcome.
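The opportunity-cost relation behind these medians is cost = bed days lost × value of a bed day. The per-diem value below is merely implied by the abstract's own medians and is an assumption for illustration, not a figure reported by the study:

```python
# Back-of-envelope check of the abstract's opportunity-cost relation.
# The implied per-diem bed-day value is derived here, not reported
# by the study itself.
MEDIAN_BED_DAYS_LOST = 398_432
MEDIAN_OPPORTUNITY_COST_AUD = 285_000_000

implied_bed_day_value = MEDIAN_OPPORTUNITY_COST_AUD / MEDIAN_BED_DAYS_LOST
print(f"Implied value per bed day: AU${implied_bed_day_value:,.0f}")
```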
Abstract:
The diversity of modern networks (wired and wireless) favours a TCP solution that is robust across a wide range of networks rather than one fine-tuned for a particular network at the cost of others. TCP parallelization uses multiple virtual TCP connections to transfer data for a single application process and opens a way to improve TCP performance across a wide range of environments: high bandwidth-delay product (BDP) networks, wireless networks, and conventional networks. In particular, it can significantly benefit the emerging high-speed wireless networks. Despite its potential to work well over this range of networks, it is not fully understood how TCP parallelization performs when experiencing various packet losses in heterogeneous environments. This paper examines current TCP parallelization methods under various packet losses and shows how to improve the performance of TCP parallelization.
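The core mechanism, striping one application's data across several parallel connections and reassembling it at the receiver, can be sketched on the loopback interface. The chunk framing (a 4-byte index plus a 4-byte length) is an illustrative toy protocol, not the scheme used by the methods the paper studies:

```python
# Toy sketch of TCP parallelization: one payload is striped across
# several parallel loopback connections and reassembled by chunk index.
import socket
import struct
import threading

N_CONNECTIONS = 4
PAYLOAD = bytes(range(256)) * 64  # 16 KiB of test data


def serve(server_sock: socket.socket, result: dict) -> None:
    """Accept one connection per chunk; store each chunk by its index."""
    for _ in range(N_CONNECTIONS):
        conn, _ = server_sock.accept()
        with conn:
            header = b""
            while len(header) < 8:
                header += conn.recv(8 - len(header))
            index, length = struct.unpack("!II", header)
            data = b""
            while len(data) < length:
                part = conn.recv(length - len(data))
                if not part:
                    break
                data += part
            result[index] = data


def send_chunk(port: int, index: int, chunk: bytes) -> None:
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(struct.pack("!II", index, len(chunk)) + chunk)


server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(N_CONNECTIONS)
port = server.getsockname()[1]

received: dict = {}
acceptor = threading.Thread(target=serve, args=(server, received))
acceptor.start()

# Stripe the payload across N virtual connections, one sender thread each.
size = len(PAYLOAD) // N_CONNECTIONS
senders = [
    threading.Thread(target=send_chunk,
                     args=(port, i, PAYLOAD[i * size:(i + 1) * size]))
    for i in range(N_CONNECTIONS)
]
for t in senders:
    t.start()
for t in senders:
    t.join()
acceptor.join()
server.close()

reassembled = b"".join(received[i] for i in range(N_CONNECTIONS))
assert reassembled == PAYLOAD
```

The performance argument is that a loss on one connection triggers congestion-window backoff only on that connection, so the aggregate flow degrades more gracefully than a single TCP stream would.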
Abstract:
Non-technical loss (NTL) identification and prediction are important tasks for many utilities. Data from a customer information system (CIS) can be used for NTL analysis; however, to perform NTL analysis accurately and efficiently, the original CIS data need to be pre-processed before any detailed analysis can be carried out. In this paper, we propose a feature-selection-based method for CIS data pre-processing that extracts the most relevant information for further analysis such as clustering and classification. By removing irrelevant and redundant features, feature selection is an essential step in the data-mining process: it finds an optimal subset of features that improves the quality of results through faster processing, higher accuracy, and simpler models with fewer features. A detailed feature selection analysis is presented in the paper. Both time-domain and load-shape data are compared on the basis of accuracy, consistency, and statistical dependencies between features.
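A minimal filter-style feature selection pass of the kind described, ranking features by relevance to the target and dropping those redundant with an already-selected feature, can be sketched as follows. The column names and data are synthetic stand-ins for CIS fields, and this greedy Pearson-correlation filter is an illustrative technique, not the paper's method:

```python
# Minimal filter-style feature selection: rank features by absolute
# correlation with the target, then greedily drop near-duplicates.
# Data and feature names are synthetic illustrations.
import math


def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0


def select_features(columns, target, redundancy_cut=0.95):
    """Greedy relevance/redundancy filter over named feature columns."""
    ranked = sorted(columns, key=lambda c: -abs(pearson(columns[c], target)))
    chosen = []
    for name in ranked:
        if all(abs(pearson(columns[name], columns[kept])) < redundancy_cut
               for kept in chosen):
            chosen.append(name)
    return chosen


# Synthetic CIS-like example: load_kwh drives the target,
# load_kwh_copy is perfectly redundant, noise is weakly relevant.
load = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
columns = {
    "load_kwh": load,
    "load_kwh_copy": [x * 2.0 for x in load],
    "noise": [0.3, -1.2, 0.8, 0.1, -0.5, 0.9],
}
target = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1]
print(select_features(columns, target))
```

The redundant copy is eliminated because it is almost perfectly correlated with an already-selected feature, which is the "simpler results with fewer features" effect the abstract describes.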