129 results for Data Streams Distribution


Relevance: 30.00%
Abstract:

This thesis reports on an investigation to develop an advanced and comprehensive milling process model of the raw sugar factory. Although the new model can be applied to both four-roller and six-roller milling units, it is primarily developed for the six-roller mills widely used in the Australian sugar industry. The approach taken was to gain an understanding of the previous milling process simulation model, "MILSIM", developed at the University of Queensland nearly four decades ago. Although MILSIM was widely adopted in the Australian sugar industry for simulating the milling process, it rested on some incorrect assumptions. This study aimed to eliminate those incorrect assumptions and to develop an advanced model that represents the milling process correctly and tracks the flow of cane components that previous models did not consider. The development of the milling process model was done in three stages. Firstly, an enhanced milling unit extraction model (MILEX) was developed to assess the mill performance parameters and predict the extraction performance of the milling process. New definitions for the milling performance parameters were developed, and a complete milling train, along with the juice screen, was modelled. The MILEX model was validated with factory data, and the variation in the mill performance parameters was observed and studied. Case studies were undertaken to examine the effect of fibre in juice streams, juice in cush return and imbibition% fibre on the extraction performance of the milling process. It was concluded that the empirical relations developed for the mill performance parameters in the MILSIM model were not applicable to the new model; new empirical relations must be developed before the model can be applied with confidence.
Secondly, a soluble and insoluble solids model was developed using modelling theory and experimental data to track the flow of sucrose (pol), reducing sugars (glucose and fructose), soluble ash, true fibre and mud solids entering the milling train through the cane supply, and their distribution in juice and bagasse streams. The soluble impurities and mud solids in cane affect the performance of the milling train and the further processing of juice and bagasse. New mill performance parameters were developed in the model to track the flow of cane components. The developed model is the first of its kind and provides additional insight into the flow of soluble and insoluble cane components and the factors affecting their distribution in juice and bagasse. The model proved to be a good extension to the MILEX model for studying the overall performance of the milling train. Thirdly, the developed models were incorporated into the proprietary software package "SysCAD" for advanced operational efficiency and for availability in the 'whole of factory' model. The MILEX model was first developed in SysCAD to represent a single milling unit; eventually the entire milling train and the juice screen were built in SysCAD using a series of controllers and other features of the software. The models developed in SysCAD can be run from a macro-enabled Excel file, and reports can be generated as Excel sheets. The flexibility of the software, its ease of use and other advantages are described in the relevant chapter. The MILEX model has been developed in both static and dynamic modes; the application of the dynamic mode is still in progress.
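The extraction logic described above reduces, at its core, to a sucrose (pol) mass balance over a milling unit. The sketch below illustrates that balance in Python; the function name and the figures are illustrative assumptions, not MILEX's actual definitions or data.

```python
# Hypothetical illustration: a single milling unit tracked as a sucrose (pol)
# mass balance. Names and numbers are invented, not MILEX's actual equations.

def extraction(pol_in_cane_kg, pol_in_bagasse_kg):
    """Pol extraction: fraction of incoming sucrose recovered in the juice."""
    pol_in_juice_kg = pol_in_cane_kg - pol_in_bagasse_kg  # mass balance
    return pol_in_juice_kg / pol_in_cane_kg

print(round(extraction(100.0, 4.0), 2))  # → 0.96, i.e. 96% of pol leaves in the juice
```

In the full model, each cane component (reducing sugars, ash, true fibre, mud solids) would carry its own balance of this form through every unit in the milling train.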

Relevance: 30.00%
Abstract:

Operational modal analysis (OMA) is prevalent in modal identification of civil structures. It requires response measurements of the underlying structure under ambient loads, and a valid OMA method requires the excitation to be white noise in time and space. Although there are numerous applications of OMA in the literature, few have investigated the statistical distribution of a measurement and the influence of such randomness on modal identification. This research applies a modified kurtosis index to evaluate the statistical distribution of raw measurement data. In addition, a windowing strategy employing this index is proposed to select quality datasets. To demonstrate how the data selection strategy works, ambient vibration measurements of a laboratory bridge model and a real cable-stayed bridge were considered, with frequency domain decomposition (FDD) as the target OMA approach for modal identification. The modal identification results obtained from data segments with different degrees of randomness were compared. The discrepancy in the resulting FDD spectra indicates that, in order to fulfil the assumptions of an OMA method, special care must be taken in processing long vibration measurement records. The proposed data selection strategy is easy to apply and verified effective in modal analysis.
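The kurtosis-based screening idea can be sketched as follows. Note this uses plain sample kurtosis rather than the paper's modified index, and the window length and threshold are invented for illustration.

```python
import math
import random

def kurtosis(xs):
    """Plain sample kurtosis m4/m2^2 (not the paper's modified index).
    Gaussian data give ~3; heavy-tailed transient events push it higher."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2 ** 2)

def select_windows(signal, win, threshold=3.5):
    """Keep only windows whose kurtosis stays near the Gaussian value of 3,
    i.e. segments most consistent with the white-noise excitation assumption.
    Window length and threshold here are illustrative choices."""
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)
            if kurtosis(signal[i:i + win]) < threshold]

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(2000)]  # stand-in ambient record
print(len(select_windows(noise, 500)))  # number of windows kept
```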

Relevance: 30.00%
Abstract:

OBJECTIVE: The objective of this study was to describe the distribution of conjunctival ultraviolet autofluorescence (UVAF) in an adult population. METHODS: We conducted a cross-sectional, population-based study in the genetic isolate of Norfolk Island, South Pacific Ocean. In all, 641 people aged 15 to 89 years were recruited. UVAF and standard (control) photographs were taken of the nasal and temporal interpalpebral regions bilaterally. Differences between groups for non-normally distributed continuous variables were assessed using the Wilcoxon-Mann-Whitney rank-sum test. Trends across categories were assessed using Cuzick's non-parametric test for trend or Kendall's rank correlation τ. RESULTS: Conjunctival UVAF is a non-normally distributed trait with a positively skewed distribution. The median amount of conjunctival UVAF per person (sum of four measurements: right nasal/temporal and left nasal/temporal) was 28.2 mm² (interquartile range 14.5-48.2). There was an inverse linear relationship between UVAF and advancing age (P<0.001). Males had a higher sum of UVAF than females (34.4 mm² vs 23.2 mm², P<0.0001). There were no statistically significant differences in the area of UVAF between right and left eyes or between nasal and temporal regions. CONCLUSION: We have provided the first quantifiable estimates of conjunctival UVAF in an adult population. Further data are required to describe the natural history of UVAF and to characterise other potential disease associations. UVR-protective strategies should be emphasised from an early age to prevent the long-term adverse health effects associated with excess UVR.
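The descriptive statistics named above (median, interquartile range, rank-sum comparison) can be reproduced with a few lines of Python. The UVAF values below are invented, and the Mann-Whitney implementation is a minimal U statistic, not a full test with p-values.

```python
import statistics

# Illustrative only: per-person UVAF totals (mm^2), not the study's data.
males = [40.1, 28.0, 55.3, 33.9, 61.7, 22.4]
females = [25.6, 18.2, 30.1, 12.9, 27.3, 21.0]

q1, med, q3 = statistics.quantiles(sorted(males + females), n=4)  # pooled quartiles
print(f"median {med:.1f} mm^2 (IQR {q1:.1f}-{q3:.1f})")

def mann_whitney_u(a, b):
    """U statistic: count of (a_i, b_j) pairs with a_i > b_j (+0.5 for ties).
    A large-sample test would convert this to a z-score for a p-value."""
    return sum(0.5 if x == y else (1.0 if x > y else 0.0) for x in a for y in b)

print(mann_whitney_u(males, females))  # U for the male-vs-female comparison
```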

Relevance: 30.00%
Abstract:

We present a method for the optical encryption of information, based on the time-dependent dynamics of writing and erasing refractive index changes in a bulk lithium niobate medium. Information is written into the photorefractive crystal with a spatially amplitude-modulated laser beam; overexposure significantly degrades the stored data, making it unrecognizable. We show that this degradation can be reversed and that a one-to-one relationship exists between the degradation and recovery rates. This simple relationship can be used to determine the erasure time required to decrypt the scrambled index patterns. In addition, the method can serve as a straightforward general technique for determining characteristic writing and erasure rates in photorefractive media.
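A hedged numerical sketch of the idea: if writing and erasure are approximated as first-order exponentials (a common simplification for photorefractive media; the time constants here are invented, not measured values from this work), the erasure time needed to bring an overexposed grating back to a target level follows directly.

```python
import math

# Hedged sketch: first-order exponential model of photorefractive dynamics.
# The characteristic times below are hypothetical, purely for illustration.
TAU_WRITE, TAU_ERASE = 10.0, 25.0  # seconds

def written(t):
    """Index modulation after writing for time t (saturating exponential)."""
    return 1.0 - math.exp(-t / TAU_WRITE)

def erase_time(overexposed, target):
    """Erasure time to decay an overexposed grating of amplitude `overexposed`
    down to `target` amplitude, assuming exponential erasure."""
    return TAU_ERASE * math.log(overexposed / target)

# Time to undo an overexposure (50 s of writing) back to the intended 10 s level:
print(round(erase_time(written(50.0), written(10.0)), 2))  # seconds (illustrative)
```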

Relevance: 30.00%
Abstract:

Background: The expansion of cell colonies is driven by a delicate balance of several mechanisms, including cell motility, cell-to-cell adhesion and cell proliferation. New approaches that can independently identify and quantify the role of each mechanism will help us understand how each contributes to the expansion process. Standard mathematical modelling approaches to describing such cell colony expansion typically neglect cell-to-cell adhesion, despite the fact that it is thought to play an important role. Results: We use a combined experimental and mathematical modelling approach to determine the cell diffusivity, D, the cell-to-cell adhesion strength, q, and the cell proliferation rate, λ, in an expanding colony of MM127 melanoma cells. Using a circular barrier assay, we extract several types of experimental data and use a mathematical model to independently estimate D, q and λ. In our first set of experiments, we suppress cell proliferation and analyse three different types of data to estimate D and q. We find that standard types of data, such as the area enclosed by the leading edge of the expanding colony and more detailed cell density profiles throughout the expanding colony, do not provide sufficient information to uniquely identify D and q. Additional data relating to the degree of cell-to-cell clustering are required to provide independent estimates of q, and in turn D. In our second set of experiments, where proliferation is not suppressed, we use data describing temporal changes in cell density to determine the cell proliferation rate. In summary, we find that our experiments are best described using the ranges D = 161–243 μm² hour⁻¹, q = 0.3–0.5 (low to moderate strength) and λ = 0.0305–0.0398 hour⁻¹, and with these parameters we can accurately predict the temporal variations in the spatial extent and cell density profile throughout the expanding melanoma cell colony.
Conclusions: Our systematic approach to identifying the cell diffusivity, cell-to-cell adhesion strength and cell proliferation rate highlights the importance of integrating multiple types of data to accurately quantify the factors influencing the spatial expansion of melanoma cell colonies.
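A minimal sketch of the kind of model involved: a 1-D Fisher-KPP-type finite-difference scheme using D and λ values from the reported ranges. The study's actual model also includes the adhesion parameter q, which this simplified sketch omits; the grid and time step are arbitrary illustrative choices.

```python
# 1-D reaction-diffusion sketch: dc/dt = D d2c/dx2 + lambda*c*(1-c).
# D and LAM are taken from the reported ranges; adhesion (q) is omitted here.
D, LAM = 200.0, 0.035        # um^2/hour and 1/hour
DX, DT = 20.0, 0.4           # grid spacing (um) and time step (hours)
N = 100                      # D*DT/DX^2 = 0.2 < 0.5, so the scheme is stable
cells = [1.0 if i < 5 else 0.0 for i in range(N)]  # dense colony at the left edge

for _ in range(60):  # simulate 24 hours (60 steps of 0.4 h)
    lap = [cells[max(i - 1, 0)] - 2 * cells[i] + cells[min(i + 1, N - 1)]
           for i in range(N)]  # discrete Laplacian with no-flux boundaries
    cells = [c + DT * (D * l / DX ** 2 + LAM * c * (1 - c))
             for c, l in zip(cells, lap)]

print(sum(c > 0.5 for c in cells))  # width (in grid cells) of the dense region
```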

Relevance: 30.00%
Abstract:

Spatial organisation of proteins according to their function plays an important role in the specificity of their molecular interactions. Emerging proteomics methods seek to assign proteins to sub-cellular locations by partial separation of organelles and computational analysis of protein abundance distributions among partially separated fractions. Such methods permit simultaneous analysis of unpurified organelles and promise proteome-wide localisation in scenarios wherein perturbation may prompt dynamic re-distribution. Resolving organelles that display similar behaviour during a protocol designed to provide partial enrichment represents a possible shortcoming. We employ the Localisation of Organelle Proteins by Isotope Tagging (LOPIT) organelle proteomics platform to demonstrate that combining information from distinct separations of the same material can improve organelle resolution and assignment of proteins to sub-cellular locations. Two previously published experiments, whose distinct gradients are alone unable to fully resolve six known protein-organelle groupings, are subjected to a rigorous analysis to assess protein-organelle association via a contemporary pattern recognition algorithm. Upon straightforward combination of single-gradient data, we observe significant improvement in protein-organelle association via both a non-linear support vector machine algorithm and partial least-squares discriminant analysis. The outcome yields suggestions for further improvements to present organelle proteomics platforms, and a robust analytical methodology via which to associate proteins with sub-cellular organelles.
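The benefit of combining separations can be seen in a toy distance-based example: two proteins with identical abundance profiles in one gradient become separable once a second gradient's profile is concatenated. All protein names and numbers below are invented; real LOPIT analyses use far richer profiles and classifiers such as SVMs.

```python
# Toy illustration of the combination idea: each gradient gives a protein an
# abundance profile across fractions; concatenating profiles can separate
# proteins that a single gradient confuses. Data are invented.
profiles_g1 = {"protA": [0.9, 0.1], "protB": [0.9, 0.1]}  # gradient 1: identical
profiles_g2 = {"protA": [0.2, 0.8], "protB": [0.8, 0.2]}  # gradient 2: distinct

def combined(name):
    return profiles_g1[name] + profiles_g2[name]  # simple concatenation

def dist(u, v):
    """Euclidean distance between two abundance profiles."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# A single gradient cannot tell A from B; the combined view can.
print(dist(profiles_g1["protA"], profiles_g1["protB"]))        # → 0.0
print(round(dist(combined("protA"), combined("protB")), 3))    # → 0.849
```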

Relevance: 30.00%
Abstract:

BACKGROUND: An examination of melanoma incidence according to anatomical region may be one method of monitoring the impact of public health initiatives. OBJECTIVES: To examine melanoma incidence trends by body site, sex and age at diagnosis, and by body site and morphology, in a population at high risk. MATERIALS AND METHODS: Population-based data on invasive melanoma cases (n = 51,473) diagnosed between 1982 and 2008 were extracted from the Queensland Cancer Registry. Age-standardized incidence rates were calculated using the direct method (2000 world standard population), and joinpoint regression models were used to fit trend lines. RESULTS: Significantly decreasing trends for melanomas on the trunk and upper limbs/shoulders were observed during recent years for both sexes under the age of 40 years and among males aged 40-59 years. However, in the 60-and-over age group, the incidence of melanoma is continuing to increase at all sites (apart from the trunk) for males, and on the scalp/neck and upper limbs/shoulders for females. Rates of nodular melanoma are currently decreasing on the trunk and lower limbs. In contrast, superficial spreading melanoma is significantly increasing on the scalp/neck and lower limbs, along with substantial increases in lentigo maligna melanoma since the late 1990s at all sites apart from the lower limbs. CONCLUSIONS: In this large study we have observed significant decreases in rates of invasive melanoma in the younger age groups on less frequently exposed body sites. These results may provide some indirect evidence of the impact of long-running primary prevention campaigns.
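The direct method of age standardisation mentioned above is a weighted sum of stratum-specific rates. A minimal sketch, with invented age bands, case counts and standard-population weights:

```python
# Direct age standardisation: weight each age band's crude rate by the
# standard population's share of that band. All numbers are invented.
strata = [  # (cases, person-years at risk, standard-population weight)
    (10, 50_000, 0.40),  # e.g. under 40 years
    (40, 40_000, 0.35),  # e.g. 40-59 years
    (90, 30_000, 0.25),  # e.g. 60 and over
]

def age_standardised_rate(strata):
    """Weighted sum of stratum-specific rates, per 100,000 person-years."""
    return sum(w * (cases / pyr) for cases, pyr, w in strata) * 100_000

print(round(age_standardised_rate(strata), 1))  # → 118.0 per 100,000
```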

Relevance: 30.00%
Abstract:

This study considered the problem of predicting survival based on three alternative models: a single Weibull, a mixture of Weibulls and a cure model. Instead of the common procedure of choosing a single “best” model, where “best” is defined in terms of goodness of fit to the data, a Bayesian model averaging (BMA) approach was adopted to account for model uncertainty. This was illustrated using a case study in which the aim was the description of lymphoma cancer survival with covariates given by phenotypes and gene expression. The results indicate that if the sample size is sufficiently large, one of the three models emerges as having the highest probability given the data, as indicated by the goodness-of-fit measure, the Bayesian information criterion (BIC). However, when the sample size was reduced, no single model was revealed as “best”, suggesting that a BMA approach would be appropriate. Although a BMA approach can compromise goodness of fit to the data (when compared to the true model), it can provide robust predictions and facilitate more detailed investigation of the relationships between gene expression and patient survival.
Keywords: Bayesian modelling; Bayesian model averaging; Cure model; Markov chain Monte Carlo; Mixture model; Survival analysis; Weibull distribution
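The BMA idea can be sketched with the common BIC-weight approximation to posterior model probabilities; the BIC values below are invented, and equal prior model probabilities are assumed.

```python
import math

# BIC-weight approximation to Bayesian model averaging. Hypothetical BIC
# values for the three candidate survival models; equal priors assumed.
bics = {"single Weibull": 1012.4, "Weibull mixture": 1008.1, "cure model": 1009.3}

def bma_weights(bics):
    """Posterior model probabilities ~ exp(-BIC/2), normalised.
    Subtracting the best BIC first avoids floating-point underflow."""
    best = min(bics.values())
    raw = {m: math.exp(-(b - best) / 2) for m, b in bics.items()}
    total = sum(raw.values())
    return {m: r / total for m, r in raw.items()}

w = bma_weights(bics)
print(max(w, key=w.get))  # → Weibull mixture (but its weight is well below 1)
```

An averaged prediction would then weight each model's survival curve by these probabilities rather than committing to the single "best" model.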

Relevance: 30.00%
Abstract:

Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a 'noisy' environment such as contemporary social media, is to collect the pertinent information: be it information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information giving an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing over time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are preformed, that is, built around a particular keyword, hashtag or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting those which need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions: content analysis and user profiling. In the former, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who either serve as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sports men and women create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of them. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme among these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
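The content-analysis scoring described in the first paper might be sketched as follows; the keyword weights, authority accounts and sample tweets are all invented for illustration.

```python
# Hedged sketch of crisis-tweet triage: score each tweet on crisis keywords
# (content analysis) and known authoritative authors (user profiling), then
# keep only the top items for human review. All terms and weights are invented.
CRISIS_TERMS = {"flood": 3, "evacuate": 5, "road closed": 4, "help": 2}
AUTHORITATIVE = {"@emergencyqld", "@bom_au"}  # hypothetical authority accounts

def score(tweet, author):
    text = tweet.lower()
    s = sum(w for term, w in CRISIS_TERMS.items() if term in text)
    if author in AUTHORITATIVE:
        s += 5  # boost known-good sources
    return s

def triage(stream, capacity):
    """Return the `capacity` highest-scoring tweets for manual filtering."""
    return sorted(stream, key=lambda t: score(*t), reverse=True)[:capacity]

stream = [("Evacuate now, road closed at the bridge", "@emergencyqld"),
          ("Nice weather today", "@someone"),
          ("Flood waters rising, need help", "@resident")]
print([author for _, author in triage(stream, 2)])  # → ['@emergencyqld', '@resident']
```

Terms that score highly across many kept tweets could then feed back into the keyword list for the next round of collection, matching the iterative refinement described above.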

Relevance: 30.00%
Abstract:

Introduction: The built environment is increasingly recognised as being associated with health outcomes. Relationships between the built environment and health differ among age groups, especially between children and adults, but also between younger, mid-age and older adults. Yet few studies address differences across life stage groups within a single population. Moreover, existing research mostly focuses on physical activity behaviours, with few studies examining objective clinical and mental health outcomes. The Life Course Built Environment and Health (LCBEH) project explores the impact of the built environment on self-reported and objectively measured health outcomes in a random sample of people across the life course. Methods and analysis: This cross-sectional data linkage study involves 15,954 children (0-15 years), young adults (16-24 years), adults (25-64 years) and older adults (65+ years) from the Perth metropolitan region who completed the Health and Wellbeing Surveillance System survey administered by the Department of Health of Western Australia from 2003 to 2009. Survey data were linked to Western Australia's (WA) Hospital Morbidity Database System (hospital admission) and Mental Health Information System (mental health system outpatient) data. Participants' residential addresses were geocoded and features of their 'neighbourhood' were measured using Geographic Information Systems software. Associations between the built environment and self-reported and clinical health outcomes will be explored across varying geographic scales and life stages. Ethics and dissemination: The University of Western Australia's Human Research Ethics Committee and the Department of Health of Western Australia approved the study protocol (#2010/1). Findings will be published in peer-reviewed journals and presented at local, national and international conferences, thus contributing to the evidence base informing the design of healthy neighbourhoods for all residents.

Relevance: 30.00%
Abstract:

Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to study. Its recent gain in popularity can be attributed, to some degree, to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as explicit data at a fine level of detail are used, and computer-intensive, as many interactions between agents, which can learn and have goals, are required. With the growing availability of data and the increase in computer power, these concerns are, however, fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims to answer a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model.
Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities' physical characteristics, and b) agents, which describe their behaviour according to their goals and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same; this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on the simulation to be run. For example, data can be used to describe the environment to which the agents respond (e.g. weather for solar panels) or to describe the assets and their relation to one another (e.g. the network assets). Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours as well as new types of assets.
Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains, such as transport, which is part of future work with the addition of electric vehicles.
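The asset/agent separation described above can be illustrated with a small Python sketch. The class names, fields and dispatch rule here are invented, not MODAM's actual API (which is built on OSGi and Eclipse plugins rather than Python); the point is only that physical description and behaviour live in separate, recombinable objects.

```python
from dataclasses import dataclass

@dataclass
class BatteryAsset:            # physical description only (the "asset")
    capacity_kwh: float
    depth_of_discharge: float  # usable fraction of capacity

class PeakShavingAgent:        # behaviour (the "agent"): one of many possible usages
    def __init__(self, asset):
        self.asset = asset
        self.stored_kwh = asset.capacity_kwh

    def dispatch(self, demand_kw, threshold_kw):
        """Discharge only to shave demand above a threshold, respecting the
        asset's depth-of-discharge limit."""
        usable = self.asset.capacity_kwh * self.asset.depth_of_discharge
        floor = self.asset.capacity_kwh - usable      # minimum stored energy
        excess = max(0.0, demand_kw - threshold_kw)
        supplied = min(excess, max(0.0, self.stored_kwh - floor))
        self.stored_kwh -= supplied
        return supplied

# The same asset could instead be wrapped by, say, a solar-smoothing agent.
agent = PeakShavingAgent(BatteryAsset(capacity_kwh=10.0, depth_of_discharge=0.8))
print(agent.dispatch(demand_kw=12.0, threshold_kw=9.0))  # → 3.0 (kW shaved)
```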

Relevance: 30.00%
Abstract:

Global awareness of cleaner and renewable energy is transforming the electricity sector at many levels. New technologies are increasingly being integrated into the electricity grid at high, medium and low voltage levels, new taxes on carbon emissions are being introduced, and individuals can now produce electricity, mainly through rooftop photovoltaic (PV) systems. While leading to improvements, these changes also introduce challenges, and a question that often arises is 'how can we manage this constantly evolving grid?' The Queensland Government and Ergon Energy, one of the two Queensland distribution companies, have partnered with Australian and German universities on a project to answer this question in a holistic manner. The project investigates the impact the integration of renewables and other new technologies has on the physical structure of the grid, and how this evolving system can be managed in a sustainable and economical manner. To aid understanding of what the future might bring, a software platform has been developed that integrates two modelling techniques: agent-based modelling (ABM), to capture the characteristics of the different system units accurately and dynamically, and particle swarm optimization (PSO), to find the most economical mix of network extension and integration of distributed generation over long periods of time. Using data from Ergon Energy, two types of networks have been modelled: three-phase networks, usually used in dense networks such as urban areas, and Single Wire Earth Return (SWER) networks, widely used in rural Queensland. Simulations can be performed on these networks to identify the required upgrades, following a three-step process: a) assess what is already in place and how it performs under current and future loads, b) determine what can be done to manage it and plan the future grid, and c) assess how these upgrades/new installations will perform over time. The number of small-scale distributed generators, e.g. PV and battery systems, is now sufficient (and expected to increase) to impact the operation of the grid, which in turn needs to be considered by the distribution network manager when planning upgrades and/or installations to stay within regulatory limits. Different scenarios can be simulated, with different levels of distributed generation, in place as well as expected, so that a large number of options can be assessed (step a). Once the location, sizing and timing of asset upgrades and/or installations are found using optimisation techniques (step b), it is possible to assess the adequacy of their daily performance using agent-based modelling (step c). One distinguishing feature of this software is that it is possible to analyse a whole area at once, while still having a tailored solution for each of the sub-areas. To illustrate this, considering the impact that battery and PV systems can have on the two types of networks mentioned above, three design conditions can be identified (amongst others):

· Urban conditions
  o Feeders that have a low take-up of solar generators may benefit from adding solar panels.
  o Feeders that need voltage support at specific times may be assisted by installing batteries.
· Rural conditions (SWER network)
  o Feeders that need voltage support as well as peak lopping may benefit from both battery and solar panel installations.

This small example demonstrates that no single solution can be applied across all three areas, and there is a need to be selective about which one is applied to each branch of the network. This is currently the function of the engineer, who can define various scenarios against a configuration, test them and iterate towards an appropriate solution. Future work will focus on increasing the level of automation in identifying areas where particular solutions are applicable.
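Step (b)'s optimisation can be sketched with a minimal particle swarm. Here PSO minimises a stand-in quadratic cost, whereas in the platform the cost of a candidate upgrade plan would come from network simulation; the coefficients are textbook defaults, not the project's actual settings.

```python
import random

random.seed(0)

def cost(x):                       # stand-in for "cost of an upgrade plan x"
    return (x - 3.0) ** 2          # pretend the best plan sits at x = 3

NP = 8                             # number of particles
pos = [random.uniform(-10, 10) for _ in range(NP)]
vel = [0.0] * NP
pbest = pos[:]                     # each particle's best-seen position
gbest = min(pbest, key=cost)       # swarm's best-seen position

for _ in range(100):
    for i in range(NP):
        r1, r2 = random.random(), random.random()
        # inertia + pull toward personal best + pull toward global best
        vel[i] = (0.7 * vel[i]
                  + 1.5 * r1 * (pbest[i] - pos[i])
                  + 1.5 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=cost)

print(round(gbest, 2))  # converges near the optimum at 3.0
```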

Relevance: 30.00%
Abstract:

Synapses onto dendritic spines in the lateral amygdala (LA) formed by afferents from the auditory thalamus represent a site of plasticity in Pavlovian fear conditioning. Previous work has demonstrated that thalamic afferents synapse onto LA spines expressing glutamate receptor (GluR) subunits, but the GluR subunit distribution at the synapse and within the cytoplasm has not been characterized. We therefore performed a quantitative analysis of the α-amino-3-hydroxy-5-methyl-4-isoxazole propionate (AMPA) receptor subunits GluR2 and GluR3 and the N-methyl-D-aspartate (NMDA) receptor subunits NR1 and NR2B, combining anterograde labeling of thalamo-amygdaloid afferents with postembedding immunoelectron microscopy for the GluRs in adult rats. A high percentage of thalamo-amygdaloid spines was immunoreactive for GluR2 (80%), GluR3 (83%) and NR1 (83%), while a smaller proportion of spines expressed NR2B (59%). To compare across the various subunits, the cytoplasmic-to-synaptic ratios of GluRs were measured within thalamo-amygdaloid spines. Analyses revealed that the cytoplasmic pool of GluR2 receptors was twice as large as those of the GluR3, NR1 and NR2B subunits. Our data also show that in the adult brain the NR2B subunit is expressed in the majority of thalamo-amygdaloid spines and that, within these spines, the various GluRs are differentially distributed between synaptic and non-synaptic sites. The prevalence of the NR2B subunit in thalamo-amygdaloid spines provides morphological evidence supporting its role in the fear conditioning circuit, while the differential distribution of the GluR subtypes may reflect distinct roles in this circuitry and in synaptic plasticity.

Relevance: 30.00%
Abstract:

Distributed generation (DG) resources are commonly used in electric systems to obtain minimum line losses, one of the benefits of DG, in radial distribution systems. Studies have shown the importance of appropriate selection of the location and size of DGs. This paper proposes an analytical method for solving the optimal distributed generation placement (ODGP) problem to minimize line losses in radial distribution systems, using a loss sensitivity factor (LSF) based on the bus-injection to branch-current (BIBC) matrix. The proposed method is formulated and tested on 12-bus and 34-bus radial distribution systems. The classical grid search algorithm based on successive load flows is employed to validate the results. The main advantages of the proposed method over conventional methods are its robustness and the fact that it does not require calculating and inverting large admittance or Jacobian matrices. Therefore, the simulation time and the amount of computer memory required for processing data, especially for large systems, decrease.
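The BIBC construction at the heart of the method can be shown on a tiny invented feeder: each 0/1 matrix entry records whether a bus's injection current flows through a given branch, so branch currents, and hence I²R losses, follow from a single matrix-vector product with no admittance matrix inversion.

```python
# Invented 4-bus radial feeder: substation bus 1 feeds bus 2; buses 3 and 4
# hang off bus 2. Branch k feeds bus k+1, so BIBC[k][j] = 1 when the load
# current injected at bus j+2 flows through branch k.
BIBC = [
    [1, 1, 1],  # branch 1-2 carries the loads at buses 2, 3 and 4
    [0, 1, 0],  # branch 2-3 carries only the load at bus 3
    [0, 0, 1],  # branch 2-4 carries only the load at bus 4
]
injections = [2.0, 1.0, 3.0]    # load currents at buses 2, 3, 4 (A, invented)
resistances = [0.5, 0.8, 0.4]   # branch resistances (ohm, invented)

# Branch currents = BIBC @ injections; losses = sum of R * I^2 per branch.
branch_currents = [sum(row[j] * injections[j] for j in range(3)) for row in BIBC]
losses = sum(r * i ** 2 for r, i in zip(resistances, branch_currents))
print(branch_currents, round(losses, 1))  # → [6.0, 1.0, 3.0] 22.4
```

An LSF-style analysis would then perturb the injection at each candidate DG bus and rank buses by how much the loss figure falls, which is why no Jacobian is needed.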

Relevance: 30.00%
Abstract:

Tumour necrosis factor (TNF)-α is implicated in the relationship between obesity and insulin resistance/type 2 diabetes. In an effort to understand this association better, we (i) profiled gene expression patterns of TNF, TNFR1 and TNFR2 and (ii) investigated the effects of TNF on glucose uptake in isolated adipocytes and adipose tissue explants from omental and subcutaneous depots of lean, overweight and obese individuals. TNF expression correlated with expression of TNFR2, but not TNFR1, and TNF and TNFR2 expression increased in obesity. TNFR1 expression was higher in omental than in subcutaneous adipocytes. Expression levels of TNF and either receptor did not differ between adipocytes from individuals with central and peripheral obesity. TNF suppressed glucose uptake only in insulin-stimulated subcutaneous tissue, and this suppression was only observed in tissue from lean subjects. These data support a relationship between the TNF system and body mass index (BMI), but not fat distribution, and suggest depot specificity of the TNF effect on glucose uptake. Furthermore, adipose tissue from obese subjects already appears insulin 'resistant', and this may be a result of the increased TNF levels.