432 results for size-selection
at Queensland University of Technology - ePrints Archive
Abstract:
Previously, expected satiety (ES) has been measured using software and two-dimensional pictures presented on a computer screen. In this context, ES is an excellent predictor of self-selected portions, when quantified using similar images and similar software. In the present study we sought to establish the veracity of ES as a predictor of behaviours associated with real foods. Participants (N = 30) used computer software to assess their ES and ideal portion of three familiar foods. A real bowl of one food (pasta and sauce) was then presented and participants self-selected an ideal portion size. They then consumed the portion ad libitum. Additional measures of appetite, expected and actual liking, novelty, and reward, were also taken. Importantly, our screen-based measures of expected satiety and ideal portion size were both significantly related to intake (p < .05). By contrast, measures of liking were relatively poor predictors (p > .05). In addition, consistent with previous studies, the majority (90%) of participants engaged in plate cleaning. Of these, 29.6% consumed more when prompted by the experimenter. Together, these findings further validate the use of screen-based measures to explore determinants of portion-size selection and energy intake in humans.
Abstract:
There are limited studies describing patient meal preferences in hospital; however, these data are critical to develop menus that address satisfaction and nutrition whilst balancing resources. This quality study aimed to determine preferences for meals and snacks to inform a comprehensive menu revision in a large (929-bed) tertiary public hospital. The method was based on Vivanti et al. (2008), with data collected by two final-year dietetic students. The first survey comprised 72 questions and achieved a response rate of 68% (n = 192); the second, more focused at 47 questions, achieved a higher response rate of 93% (n = 212). Findings showed over half the patients reporting poor or less-than-normal appetite, 20% describing taste issues, over a third with a LOS >7 days, a third with a MST ≥ 2, and less than half eating only from the general menu. Soup, then toast, was most frequently reported as eaten at home when unwell, and whilst the most common single response was missing no foods while in hospital (25%), steak was the most commonly missed item. Hot breakfasts were desired by the majority (63%), with over half preferring toast (even if cold). In relation to snacks, nearly half (48%) wanted something more substantial than tea/coffee/biscuits, with sandwiches (54%) and soup (33%) being suggested. Sandwiches at the evening meal were not popular (6%). Difficulties with using cutlery and meal size selection were identified as issues. Findings from this study had high utility and supported a collaborative and evidence-based approach to a successful major menu change for the hospital.
Abstract:
Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of such factors as the biological characteristics of the animals, some aspects of the fleet dynamics, and the changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the outcomes of the standardised fishing effort or the relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions from simpler statistical models. The random-effects models also yielded similar results. This is because the estimators are all consistent even if the correlation structure is mis-specified, and the data set is very large. However, the standard errors from different models differed, suggesting that different methods have different statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at assumed values from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.
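The abstract's central point, that the estimators stay consistent even when the correlation structure is mis-specified while their standard errors differ, can be sketched with a toy simulation (hypothetical data, not the NPF dataset): ordinary least squares applied to vessel-clustered catch data still recovers the true coefficient, but naive and cluster-robust ("sandwich") standard errors disagree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate catch-rate-like data with a vessel (cluster) effect:
# y = b0 + b1 * x + vessel_effect + noise  (all values illustrative)
n_vessels, per_vessel = 50, 20
b0, b1 = 1.0, 2.0
vessel = np.repeat(np.arange(n_vessels), per_vessel)
x = rng.normal(size=n_vessels * per_vessel)
vessel_effect = rng.normal(scale=1.0, size=n_vessels)[vessel]
y = b0 + b1 * x + vessel_effect + rng.normal(scale=1.0, size=x.size)

# OLS ignores the within-vessel correlation but remains consistent.
X = np.column_stack([np.ones_like(x), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Naive SE assumes independent errors.
sigma2 = resid @ resid / (X.shape[0] - X.shape[1])
naive_se = np.sqrt(np.diag(sigma2 * XtX_inv))

# Cluster-robust sandwich SE allows arbitrary within-vessel correlation.
meat = np.zeros((2, 2))
for v in range(n_vessels):
    m = vessel == v
    s = X[m].T @ resid[m]
    meat += np.outer(s, s)
robust_se = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print(beta, naive_se, robust_se)
```

With a sizeable vessel effect, the slope estimate lands near its true value of 2.0, while the robust standard error on the intercept is clearly larger than the naive one, illustrating why modelling the variance and correlation structure matters for valid inference even when point estimates barely move.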
Abstract:
There is evidence that many heating, ventilating and air conditioning (HVAC) systems installed in larger buildings have more capacity than is ever required to keep the occupants comfortable. This paper explores the reasons why this can occur by examining a typical brief/design/documentation process. Over-sized HVAC systems cost more to install and operate, and may not control thermal comfort as well as a “right-sized” system. These impacts are evaluated where data exist. Finally, some suggestions are developed to minimise both the extent and the negative impacts of HVAC system over-sizing, for example:
• Challenge “rules of thumb” and/or brief requirements which may be out of date.
• Conduct an accurate load estimate, using AIRAH design data specific to the project location, and then resist the temptation to apply “safety factors”.
• Use a load estimation program that accounts for thermal storage and diversification of peak loads for each zone and air handling system.
• Select chiller sizes and staged or variable-speed pumps and fans to ensure good part-load performance.
• Allow for unknown future tenancies by designing flexibility into the system, not by over-sizing. For example, generous sizing of distribution pipework and ductwork will allow available capacity to be redistributed.
• Provide an auxiliary tenant condenser water loop to handle high-load areas.
• Consider using an Integrated Design Process: build an integrated load and energy use simulation model and test different operational scenarios.
• Use comprehensive life-cycle cost analysis for selection of the most appropriate design solutions.
This paper is an interim report on the findings of CRC-CI project 2002-051-B, Right-Sizing HVAC Systems, which is due for completion in January 2006.
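The life-cycle cost argument in the suggestions above can be sketched numerically. The figures below are hypothetical, chosen only to show how higher capital cost and poorer part-load efficiency compound over a plant's life:

```python
# Illustrative life-cycle cost comparison of a right-sized vs an
# over-sized chiller plant; all dollar figures are hypothetical.
def life_cycle_cost(capital, annual_energy_cost, years=20, discount_rate=0.05):
    """Capital plus the present value of annual operating costs."""
    pv_factor = sum(1.0 / (1.0 + discount_rate) ** y for y in range(1, years + 1))
    return capital + annual_energy_cost * pv_factor

# Over-sizing raises capital cost, and worse part-load efficiency
# pushes the annual energy cost up as well.
right_sized = life_cycle_cost(capital=500_000, annual_energy_cost=60_000)
over_sized = life_cycle_cost(capital=650_000, annual_energy_cost=72_000)

print(round(right_sized), round(over_sized))
```

Under these assumptions the over-sized plant costs several hundred thousand dollars more over 20 years, which is the kind of comparison a comprehensive life-cycle cost analysis would surface at design time.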
Abstract:
Single particle analysis (SPA) coupled with high-resolution electron cryo-microscopy is emerging as a powerful technique for the structure determination of membrane protein complexes and soluble macromolecular assemblies. Current estimates suggest that ∼104–105 particle projections are required to attain a 3 Å resolution 3D reconstruction (symmetry dependent). Selecting this number of molecular projections differing in size, shape and symmetry is a rate-limiting step for the automation of 3D image reconstruction. Here, we present SwarmPS, a feature rich GUI based software package to manage large scale, semi-automated particle picking projects. The software provides cross-correlation and edge-detection algorithms. Algorithm-specific parameters are transparently and automatically determined through user interaction with the image, rather than by trial and error. Other features include multiple image handling (∼102), local and global particle selection options, interactive image freezing, automatic particle centering, and full manual override to correct false positives and negatives. SwarmPS is user friendly, flexible, extensible, fast, and capable of exporting boxed out projection images, or particle coordinates, compatible with downstream image processing suites.
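The cross-correlation picking that SwarmPS automates can be illustrated with a minimal sketch (this is a generic zero-mean normalized cross-correlation, not SwarmPS's actual implementation): a template is slid across a synthetic micrograph and the correlation peak locates the particle.

```python
import numpy as np

def ncc_match(image, template):
    """Return the (row, col) top-left corner where the zero-mean
    normalized cross-correlation with `template` peaks, plus the score."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tnorm
            if denom == 0:
                continue
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

# Synthetic micrograph: noisy background plus one bright disc "particle".
rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.1, size=(64, 64))
yy, xx = np.mgrid[0:15, 0:15]
disc = ((yy - 7) ** 2 + (xx - 7) ** 2 <= 36).astype(float)
img[20:35, 30:45] += disc  # particle planted at (20, 30)

pos, score = ncc_match(img, disc)
print(pos, round(score, 3))
```

Production packages replace the brute-force loop with FFT-based correlation and add the centering and false-positive handling the abstract describes, but the matching principle is the same.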
Abstract:
This paper presents techniques which can lead to the diagnosis of faults in a small multi-cylinder diesel engine. Preliminary analysis of the acoustic emission (AE) signals is outlined, including time-frequency analysis and selection of the optimum frequency band. The results of applying mean field independent component analysis (MFICA) to separate the AE root mean square (RMS) signals, and the effects of changing parameter values, are also outlined. The results on separation of RMS signals show this technique has the potential to increase the probability of successfully identifying the AE events associated with the various mechanical events within the combustion process of multi-cylinder diesel engines.
Abstract:
Pronounced phenotypic shifts in island populations are typically attributed to natural selection, but reconstructing heterogeneity in long-term selective regimes remains a challenge. We examined a scenario of divergence proposed for species colonizing a new environment, involving directional selection with a rapid shift to a new optimum and subsequent stabilization. We provide some of the first empirical evidence for this model of evolution using morphological data from three timescales in an island bird, Zosterops lateralis chlorocephalus. In less than four millennia since separation from its mainland counterpart, a substantial increase in body size has occurred and was probably achieved in fewer than 500 generations after colonization. Over four recent decades, morphological traits have fluctuated in size but showed no significant directional trends, suggesting maintenance of a relatively stable phenotype. Finally, estimates of contemporary selection gradients indicated generally weak directional selection. These results provide a rare description of heterogeneity in long-term selective regimes, and caution that observations of current selection may be of limited value in inferring mechanisms of past adaptation, due to a lack of constancy even over short time-frames.
Abstract:
Island races of passerine birds display repeated evolution towards larger body size compared with their continental ancestors. The Capricorn silvereye (Zosterops lateralis chlorocephalus) has become up to six phenotypic standard deviations bigger in several morphological measures since colonization of an island approximately 4000 years ago. We estimated the genetic variance-covariance (G) matrix using full-sib and 'animal model' analyses, and selection gradients, for six morphological traits under field conditions in three consecutive cohorts of nestlings. Significant levels of genetic variance were found for all traits. Significant directional selection was detected for wing and tail lengths in one year and quadratic selection on culmen depth in another year. Although selection gradients on many traits were negative, the predicted evolutionary response to selection of these traits for all cohorts was uniformly positive. These results indicate that the G matrix and predicted evolutionary responses are consistent with those of a population evolving in the manner observed in the island passerine trend, that is, towards larger body size.
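The seemingly paradoxical result above, negative selection gradients on many traits yet uniformly positive predicted responses, follows directly from the multivariate breeder's equation, Δz = Gβ: strong positive genetic covariances can dominate weak negative direct selection. A toy numeric example (illustrative values, not the paper's estimates):

```python
import numpy as np

# Hypothetical 3-trait G matrix with strong positive genetic covariances,
# as is typical for correlated morphological traits (values illustrative).
G = np.array([[1.0, 0.8, 0.8],
              [0.8, 1.0, 0.8],
              [0.8, 0.8, 1.0]])

# Selection gradients: direct selection is negative on two of the traits.
beta = np.array([0.6, -0.1, -0.1])

# Multivariate breeder's equation: predicted per-generation response.
delta_z = G @ beta
print(delta_z)
```

Despite two negative gradients, every element of Δz comes out positive, which is exactly the pattern the abstract reports for the island passerine trend towards larger body size.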
Abstract:
Laboratory-based studies of human dietary behaviour benefit from highly controlled conditions; however, this approach can lack ecological validity. Identifying a reliable method to capture and quantify natural dietary behaviours represents an important challenge for researchers. In this study, we scrutinised cafeteria-style meals in the ‘Restaurant of the Future.’ Self-selected meals were weighed and photographed, both before and after consumption. Using standard portions of the same foods, these images were independently coded to produce accurate and reliable estimates of (i) initial self-served portions, and (ii) food remaining at the end of the meal. Plate cleaning was extremely common; in 86% of meals at least 90% of self-selected calories were consumed. Males ate a greater proportion of their self-selected meals than did females. Finally, when participants visited the restaurant more than once, the correspondence between selected portions was better predicted by the weight of the meal than by its energy content. These findings illustrate the potential benefits of meal photography in this context. However, they also highlight significant limitations, in particular, the need to exclude large amounts of data when one food obscures another.
Abstract:
The design-build (DB) delivery system is an effective means of delivering a green construction project, and selecting an appropriate contractor is critical to project success. Moreover, the delivery of green buildings requires specific design, construction and operation and maintenance considerations not generally encountered in the procurement of conventional buildings. Specifying clear sustainability requirements to potential contractors is particularly important in achieving sustainable project goals. However, many client/owners either do not explicitly specify sustainability requirements or do so in a prescriptive manner during the project procurement process. This paper investigates the current state-of-the-art procurement process used in specifying the sustainability requirements of the public sector in the USA construction market by means of a robust content analysis of 40 design-build requests for proposals (RFPs). The results of the content analysis indicate that the sustainability requirement is one of the most important dimensions in the best-value evaluation of DB contractors. Client/owners predominantly specify the LEED certification levels (e.g. LEED Certified, Silver, Gold, and Platinum) for a particular facility, and include the sustainability requirements as selection criteria (with specific importance weightings) for contractor evaluation. Additionally, larger projects tend to allocate higher importance weightings to sustainability requirements. This study provides public DB client/owners with a number of practical implications for selecting appropriate design-builders for sustainable DB projects.
Abstract:
This paper presents a design technique for a fully regenerative dynamic dynamometer. It incorporates an energy storage system to absorb the energy variation due to dynamometer transients, which minimises the power electronics requirement at the grid connection, needing only to supply the losses. Simulation results for the full system over a driving cycle show the amount of energy required to complete the cycle, from which the size of the energy storage system can be determined.
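The sizing logic described above can be sketched in a few lines: integrate the cycle's power profile, and size the storage to cover the worst-case swing of the cumulative energy, leaving the grid converter to supply only the small average losses. The power profile below is an arbitrary stand-in, not the paper's driving cycle:

```python
import numpy as np

# Illustrative dynamometer power exchange over a driving cycle (kW),
# sampled at 1 s; positive = absorbing, negative = motoring.
t = np.arange(0, 600)  # a 10-minute cycle
power_kw = 30 * np.sin(2 * np.pi * t / 120) + 5 * np.sin(2 * np.pi * t / 37)

# Cumulative energy exchanged with the storage (kWh).
energy_kwh = np.cumsum(power_kw) / 3600.0

# The storage must cover the worst-case swing of the cumulative energy;
# the grid connection then handles only the near-zero average (the losses).
storage_kwh = energy_kwh.max() - energy_kwh.min()
avg_power_kw = power_kw.mean()
print(round(storage_kwh, 3), round(avg_power_kw, 3))
```

Because the transients largely cancel over the cycle, the average grid power is tiny compared with the ±30 kW peaks, which is the regenerative dynamometer's advantage.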
Abstract:
This paper considers the design of a radial flux permanent magnet ironless core brushless DC motor for use in an electric wheel drive with an integrated epicyclic gear reduction. The motor has been designed for a continuous output torque of 30 Nm and peak rating of 60 Nm with a maximum operating speed of 7000 RPM. In the design of brushless DC motors with a toothed iron stator the peak air-gap magnetic flux density is typically chosen to be close to that of the remanence value of the magnets used. This paper demonstrates that for an ironless motor the optimal peak air-gap flux density is closer to the maximum energy product of the magnets used. The use of a radial flux topology allows for high frequency operation and can be shown to give high specific power output while maintaining a relatively low magnet mass. Two-dimensional finite element analysis is used to predict the air-gap flux density. The motor design is based around commonly available NdFeB bar magnet sizes.
Abstract:
Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs, and setting acceptable uncertainties on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 mm to 100 mm, using a nominal photon energy of 6 MV. Results: According to the practical definition established in this project, field sizes < 15 mm were considered very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or field size uncertainties were 0.5 mm, field sizes < 12 mm were considered very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect; this was found to occur at field sizes < 12 mm. Source occlusion also caused a large change in OPF for field sizes < 8 mm. Based on the results of this study, field sizes < 12 mm were considered theoretically very small for 6 MV beams. Conclusions: Extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as the output factor measurement for each field size setting, and also very precise detector alignment, is required at field sizes at least < 12 mm and more conservatively < 15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
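The practical definition above, finding the field size below which a 1 mm error moves the OPF by more than 1%, can be illustrated with a toy OPF curve. The error-function shape and its width parameter below are made up for illustration, not fitted to the paper's Monte Carlo data:

```python
import math

# Toy output-factor curve: OPF rises with side length s (mm) as lateral
# electronic equilibrium is established. sigma is an invented shape
# parameter, not derived from the paper's simulations.
sigma = 10.0

def opf(s):
    return math.erf(s / sigma)

def rel_change_for_1mm(s):
    """Relative change in OPF caused by a 1 mm underestimate of s."""
    return (opf(s) - opf(s - 1.0)) / opf(s)

for s in (30.0, 15.0, 10.0):
    print(s, round(rel_change_for_1mm(s), 4))
```

On this toy curve the 1 mm sensitivity is negligible at 30 mm but exceeds the 1% criterion somewhere in the mid-teens, mirroring how a quantitative threshold of "very small" emerges from a sensitivity analysis.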
Abstract:
Potent and specific enzyme inhibition is a key goal in the development of therapeutic inhibitors targeting proteolytic activity. The backbone-cyclized peptide Sunflower Trypsin Inhibitor (SFTI-1) affords a scaffold that can be engineered to achieve both of these aims. SFTI-1's mechanism of inhibition is unusual in that it shows fast-on/slow-off kinetics driven by cleavage and religation of a scissile bond. This phenomenon was used to select a nanomolar inhibitor of kallikrein-related peptidase 7 (KLK7) from a versatile library of SFTI variants with diversity tailored to exploit distinctive surfaces present in the active site of serine proteases. Inhibitor selection was achieved by using size exclusion chromatography to separate protease/inhibitor complexes from unbound inhibitors, followed by inhibitor identification according to molecular mass, ascertained by mass spectrometry. This approach identified a single dominant inhibitor species with a molecular weight of 1562.4 Da, consistent with the SFTI variant SFTI-WCTF. Once synthesized individually, this inhibitor showed an IC50 of 173.9 ± 7.6 nM against chromogenic substrates and could block protein proteolysis. Molecular modelling analysis suggested that selection of SFTI-WCTF was driven by specific aromatic interactions and stabilized by an enhanced internal hydrogen-bonding network. This approach provides a robust and rapid route to inhibitor selection and design.
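The final identification step, assigning the observed mass from the protease-bound fraction to a library member, amounts to a tolerance lookup. A minimal sketch follows; the variant names and the two non-matching masses are placeholders, with only the 1562.4 Da value taken from the abstract:

```python
# Hypothetical library of SFTI variants and their expected masses (Da).
# Only 1562.4 comes from the study; the other entries are invented.
library = {
    "SFTI-variant-A": 1513.7,
    "SFTI-WCTF": 1562.4,
    "SFTI-variant-C": 1604.8,
}

def match_mass(observed_da, library, tol_da=0.5):
    """Return library variants whose mass lies within tol_da of the
    observed mass from the mass spectrum."""
    return [name for name, mass in library.items()
            if abs(mass - observed_da) <= tol_da]

print(match_mass(1562.4, library))
```

In practice the tolerance would be set by the mass spectrometer's accuracy, and ambiguous matches would need orthogonal confirmation such as individual synthesis, as was done here.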