922 results for Data Streams Distribution
Abstract:
It is commonly assumed that rates of accumulation of organic-rich strata have varied through geologic time, with some periods being particularly favourable for the accumulation of petroleum source rocks or coals. A rigorous analysis of the validity of such an assumption must take into account the basic fact that sedimentary rocks have been lost through geologic time to erosion and metamorphism; consequently, their present-day global abundance decreases with geologic age. Measurements of the global abundance of coal-bearing strata suggest that conditions for coal accumulation were exceptionally favourable during the late Carboniferous. Strata of this age constitute 21% of the world's coal-bearing strata. Global rates of coal accumulation appear to have been relatively constant since the end of the Carboniferous, with the exception of the Triassic, which contains only 1.75% of the world's coal-bearing strata. Estimation of the global amount of discovered oil by age of the source rock shows that 58% of the world's oil has been sourced from Cretaceous or younger strata and 99% from Silurian or younger strata. Although most geologic periods were favourable for oil source-rock accumulation, the mid-Permian to mid-Jurassic appears to have been particularly unfavourable, accounting for less than 2% of the world's oil. Estimation of the global amount of discovered natural gas by age of the source rock shows that 48% of the world's gas has been sourced from Cretaceous or younger strata and 99% from Silurian or younger strata. The Silurian and Late Carboniferous were particularly favourable for gas source-rock accumulation, accounting for 12.9% and 6.9% of the world's gas respectively. By contrast, Permian and Triassic source rocks account for only 1.7% of the world's natural gas. Rather than invoking global climatic or oceanic events to explain the relative abundance of organic-rich sediments through time, examination of the data suggests that the more critical control is tectonic. The majority of coals are associated with foreland basins and the majority of oil-prone source rocks are associated with rifting. The relative abundance of these types of basin through time determines the abundance and location of coals and petroleum source rocks.
Abstract:
Visual abnormalities, both at the sensory input and the higher interpretive levels, have been associated with many of the symptoms of schizophrenia. Individuals with schizophrenia typically experience distortions of sensory perception, resulting in perceptual hallucinations and delusions that are related to the observed visual deficits. Disorganised speech, thinking and behaviour are commonly experienced by sufferers of the disorder, and have also been attributed to perceptual disturbances associated with anomalies in visual processing. Compounding these issues are marked deficits in cognitive functioning that are observed in approximately 80% of those with schizophrenia. Cognitive impairments associated with schizophrenia include difficulty with concentration and memory (i.e. working, visual and verbal memory), an impaired ability to process complex information, poor response inhibition, and deficits in speed of processing and in visual and verbal learning. Deficits in sustained attention or vigilance and poor executive functioning, such as poor reasoning, problem solving and social cognition, are all influenced by impaired visual processing. These symptoms impact on the internal perceptual world of those with schizophrenia, and hamper their ability to navigate their external environment. Visual processing abnormalities in schizophrenia are likely to worsen personal, social and occupational functioning. Binocular rivalry provides a unique opportunity to investigate the processes involved in visual awareness and visual perception. Binocular rivalry is the alternation of perceptual images that occurs when conflicting visual stimuli are presented to each eye in the same retinal location. The observer perceives the opposing images in an alternating fashion, despite the sensory input to each eye remaining constant. Binocular rivalry tasks have been developed to investigate specific parts of the visual system. The research presented in this Thesis provides an explorative investigation into binocular rivalry in schizophrenia, using the method of Pettigrew and Miller (1998) and comparing individuals with schizophrenia to healthy controls. This method allows manipulation of the spatial and temporal frequency, luminance contrast and chromaticity of the visual stimuli. Manipulations of the rival stimuli affect the rate of binocular rivalry alternations and the time spent perceiving each image (dominance duration). Binocular rivalry rate and dominance durations provide useful measures for investigating aspects of visual neural processing that lead to the perceptual disturbances and cognitive dysfunction attributed to schizophrenia. However, despite this promise, the binocular rivalry phenomenon has not been extensively explored in schizophrenia to date. Following a review of the literature, the research in this Thesis examined individual variation in binocular rivalry. The initial study (Chapter 2) explored the effect of systematically altering the properties of the stimuli (i.e. spatial and temporal frequency, luminance contrast and chromaticity) on binocular rivalry rate and dominance durations in healthy individuals (n=20). The findings showed that altering the stimuli with respect to temporal frequency and luminance contrast significantly affected rate. This is significant because processing of temporal frequency and luminance contrast has consistently been demonstrated to be abnormal in schizophrenia. The current research then explored binocular rivalry in schizophrenia.
The primary research question was, "Are binocular rivalry rates and dominance durations recorded in participants with schizophrenia different to those of the controls?" In this second study, binocular rivalry data collected using low- and high-strength binocular rivalry stimuli were compared to alternations recorded during a monocular rivalry task, the Necker cube task, to replicate and advance the work of Miller et al. (2003). Participants with schizophrenia (n=20) recorded fewer alternations (i.e. slower alternation rates) than control participants (n=20) on both binocular rivalry tasks; however, no difference was observed between the groups on the Necker cube task. The magnocellular and parvocellular visual pathways, thought to be abnormal in schizophrenia, were also investigated in binocular rivalry. The binocular rivalry stimuli used in this third study (Chapter 4) were altered to bias the task towards one of these two pathways. Participants with schizophrenia recorded slower binocular rivalry rates than controls in both binocular rivalry tasks. Using a ‘within-subject design’, the binocular rivalry data were compared to data collected from a backward-masking task widely accepted to bias both these pathways. Based on these data, a model of binocular rivalry, based on the magnocellular and parvocellular pathways that contribute to the dorsal and ventral visual streams, was developed. Binocular rivalry rates were also compared with performance on Benton’s Judgment of Line Orientation task in individuals with schizophrenia and healthy controls (Chapter 5). Benton’s Judgment of Line Orientation task is widely accepted to be processed within the right cerebral hemisphere, making it an appropriate task for investigating the role of the cerebral hemispheres in binocular rivalry, and for investigating the inter-hemispheric switching hypothesis of binocular rivalry proposed by Pettigrew and Miller (1998, 2003). The data were suggestive of intra-hemispheric rather than inter-hemispheric visual processing in binocular rivalry. Neurotransmitter involvement in binocular rivalry, backward masking and Judgment of Line Orientation in schizophrenia was investigated using a genetic indicator of dopamine receptor distribution and functioning: the presence of the Taq1 allele of the dopamine D2 receptor (DRD2) gene. This final study (Chapter 6) explored whether the presence of the Taq1 allele of the DRD2 gene, and thus, by inference, the distribution of dopamine receptors and dopamine function, accounted for the large individual variation in binocular rivalry. The presence of the Taq1 allele was associated with the slower binocular rivalry rates or poorer performance on the backward-masking and Judgment of Line Orientation tasks seen in the group with schizophrenia. This Thesis has contributed to what is known about binocular rivalry in schizophrenia. Consistently slower binocular rivalry rates were observed in participants with schizophrenia, indicating abnormally slow visual processing in this group. These data support previous studies reporting visual processing abnormalities in schizophrenia and suggest that a slow binocular rivalry rate is not a feature specific to bipolar disorder, but may be a feature of disorders with psychotic features generally. The contributions of the magnocellular or dorsal pathways and parvocellular or ventral pathways to binocular rivalry, and therefore to perceptual awareness, were investigated.
The data presented supported the view that the magnocellular system initiates perceptual awareness of an image and the parvocellular system maintains the perception of the image, making it available to higher level processing occurring within the cortical hemispheres. Abnormal magnocellular and parvocellular processing may both contribute to perceptual disturbances that ultimately contribute to the cognitive dysfunction associated with schizophrenia. An alternative model of binocular rivalry based on these observations was proposed.
Abstract:
As the world’s population grows, so does the demand for agricultural products. However, natural nitrogen (N) fixation and phosphorus (P) availability cannot sustain the rising agricultural production; thus, the application of N and P fertilisers as additional nutrient sources is common. It is these anthropogenic activities that can contribute high amounts of organic and inorganic nutrients to both surface waters and groundwaters, resulting in degradation of water quality and a possible reduction of aquatic life. In addition, runoff and sewage from urban and residential areas can contain high amounts of inorganic and organic nutrients, which may also affect water quality. For example, blooms of the cyanobacterium Lyngbya majuscula along the coastline of southeast Queensland are an indicator of at least short-term decreases in water quality. Although Australian catchments, including those with intensive forms of land use, generally show a low export of nutrients compared to North American and European catchments, certain land use practices may still have a detrimental effect on the coastal environment. Numerous studies of nutrient cycling and associated processes at the catchment scale have been reported for the Northern Hemisphere. Comparable studies in Australia, in particular in subtropical regions, are, however, limited and there is a paucity of data, in particular for inorganic and organic forms of nitrogen and phosphorus; these nutrients are important limiting factors for algal blooms in surface waters. Therefore, monitoring N and P and understanding the sources and pathways of these nutrients within a catchment are important in coastal zone management. Although Australia is the driest inhabited continent, in subtropical regions such as southeast Queensland rainfall patterns have a significant effect on runoff and thus on the nutrient cycle at the catchment scale. These rainfall patterns are becoming increasingly variable. Monitoring these climatic conditions and the hydrological response of agricultural catchments is therefore also important to reduce anthropogenic effects on surface and groundwater quality. This study consists of an integrated hydrological–hydrochemical approach that assesses N and P in an environment with multiple land uses. The main aim is to determine the nutrient cycle within a representative coastal catchment in southeast Queensland, the Elimbah Creek catchment. In particular, the investigation confirms the influence of forestry and agriculture on N and P forms, sources, distribution and fate in the surface waters and groundwaters of this subtropical setting. In addition, the study determines whether N and P are subject to transport into the adjacent estuary and thus into the marine environment; also considered is the effect of local topography, soils and geology on N and P sources and distribution. The thesis is structured around four individually reported components. The first paper determines the controls of catchment settings and processes on N and P concentrations in stream water, riverbank sediment and shallow groundwater, in particular during the extended dry conditions encountered during the study. Temporal and spatial factors such as seasonal changes, soil character, land use and catchment morphology are considered, as well as their effect on the distributions of N and P in surface waters and associated groundwater.
A total of 30 surface water and 13 shallow groundwater sampling sites were established throughout the catchment to represent the dominant soil types and the land use upstream of each sampling location. Sampling comprised five rounds and was conducted over one year between October 2008 and November 2009. Surface water and groundwater samples were analysed for all major dissolved inorganic forms of N and for total N. Phosphorus was determined in the form of dissolved reactive P (predominantly orthophosphate) and total P. In addition, extracts of stream bank sediments and soil grab samples were analysed for these N and P species. Findings show that major storm events, in particular after long periods of drought conditions, are the driving force of N cycling. This is expressed by higher inorganic N concentrations in the agricultural subcatchment compared to the forested subcatchment. Nitrate N is the dominant inorganic form of N in both the surface waters and groundwaters, and values are significantly higher in the groundwaters. Concentrations in the surface water range from 0.03 to 0.34 mg N L⁻¹; organic N concentrations are considerably higher (average range: 0.33 to 0.85 mg N L⁻¹), in particular in the forested subcatchment. Average NO3-N in the groundwater ranges from 0.39 to 2.08 mg N L⁻¹, and organic N averages between 0.07 and 0.3 mg N L⁻¹. The stream bank sediments are dominated by organic N (range: 0.53 to 0.65 mg N L⁻¹), and the dominant inorganic form of N is NH4-N, with values ranging between 0.38 and 0.41 mg N L⁻¹. Topography and soils, however, were not found to have a significant effect on N and P concentrations in the waters. Detectable phosphorus in the surface and groundwaters of the catchment is limited to several locations, typically in proximity to areas with intensive animal use; in soils and sediments, P is negligible. In the second paper, the stable isotopes of N (¹⁴N/¹⁵N) and H₂O (¹⁶O/¹⁸O and ²H/¹H) in surface and groundwaters are used to identify sources of dissolved inorganic and organic N in these waters, and to determine their pathways within the catchment; specific emphasis is placed on the respective roles of forestry and agriculture. Forestry is predominantly concentrated in the northern subcatchment (Beerburrum Creek) while agriculture is mainly found in the southern subcatchment (Six Mile Creek). Results show that agriculture (horticulture, crops, grazing) is the main source of inorganic N in the surface waters of the agricultural subcatchment, and the isotopic signature of these waters shows a close link to evaporation processes that may occur during water storage in farm dams used for irrigation. Groundwaters are subject to denitrification processes that may result in reduced dissolved inorganic N concentrations. Soil organic matter delivers most of the inorganic N to the surface water in the forested subcatchment. Here, precipitation and subsequent runoff are the main sources of the surface waters. Groundwater in this area is affected by agricultural processes. The findings also show that the catchment can attenuate the effects of anthropogenic land use on surface water quality. Riparian strips of natural remnant vegetation, commonly 50 to 100 m in width, act as buffer zones along the drainage lines in the catchment and remove inorganic N from the soil water before it enters the creek.
These riparian buffer zones are common in most agricultural catchments of southeast Queensland and appear to reduce the impact of agriculture on stream water quality and subsequently on the estuary and marine environments. This reduction is expressed by a significant decrease in DIN concentrations from 1.6 mg N L⁻¹ to 0.09 mg N L⁻¹, and a decrease in the δ¹⁵N signatures, from upstream surface water locations downstream to the outlet of the agricultural subcatchment. Further testing is, however, necessary to confirm these processes. Most importantly, the amount of N that is transported to the adjacent estuary is shown to be negligible. The third and fourth components of the thesis use a hydrological catchment modelling approach to determine the water balance of the Elimbah Creek catchment. The model is then used to simulate the effects of land use on the water balance and nutrient loads of the study area. The tool used is the internationally widely applied Soil and Water Assessment Tool (SWAT). Knowledge of the water cycle of a catchment is imperative in nutrient studies, as processes such as rainfall, surface runoff, soil infiltration and routing of water through the drainage system are the driving forces of the catchment nutrient cycle. Long-term information about discharge volumes of the creeks and rivers does not, however, exist for a number of agricultural catchments in southeast Queensland, and such information is necessary to calibrate and validate numerical models. Therefore, a two-step modelling approach was used in which parameter values calibrated and validated for a nearby gauged reference catchment served as starting values for the ungauged Elimbah Creek catchment. Transposing the monthly calibrated and validated parameter values from the reference catchment to the ungauged catchment significantly improved model performance, showing that the hydrological model of the catchment of interest is a strong predictor of the water balance. The model efficiency coefficient (EF) shows that 94% of the simulated discharge matches the observed flow, whereas only 54% of the observed streamflow was reproduced by the SWAT model prior to using the validated values from the reference catchment. In addition, the hydrological model confirmed that total surface runoff contributes the majority of flow to the surface water in the catchment (65%); only a small proportion of the water in the creek is contributed by total baseflow (35%). This finding supports the results of the stable isotopes ¹⁶O/¹⁸O and ²H/¹H, which show that the main source of water in the creeks is either local precipitation or irrigation water delivered by surface runoff; a contribution from the groundwater (baseflow) to the creeks could not be identified using ¹⁶O/¹⁸O and ²H/¹H. In addition, the SWAT model calculated that around 68% of the rainfall occurring in the catchment is lost through evapotranspiration, reflecting the prevailing long-term drought conditions observed prior to and during the study. Stream discharge from the forested subcatchment was an order of magnitude lower than discharge from the agricultural Six Mile Creek subcatchment. A simulated change in land use from forestry to agriculture did not significantly change the catchment water balance; however, nutrient loads increased considerably. Conversely, a simulated change from agriculture to forestry resulted in a significant decrease in nitrogen loads.
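The model efficiency coefficient EF mentioned above is commonly computed as the Nash–Sutcliffe efficiency; the short sketch below shows that calculation only, under the assumption that EF denotes Nash–Sutcliffe efficiency, and uses invented discharge values purely for illustration.

import numpy as np

def nash_sutcliffe_efficiency(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 = perfect match, <= 0 = no better than the mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical monthly discharge values (ML/month), for illustration only.
obs = [12.0, 8.5, 30.2, 5.1, 2.4, 1.8]
sim = [11.2, 9.0, 28.5, 6.0, 2.9, 2.1]
print(f"EF = {nash_sutcliffe_efficiency(obs, sim):.2f}")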
The findings of the thesis and the approach used are shown to be of value to catchment water quality monitoring on a wider scale, in particular regarding the implications of mixed land use for nutrient forms, distributions and concentrations. The study confirms that in the tropics and subtropics the water balance is affected by extended dry periods and by seasonal rainfall with intensive storm events. In particular, the comprehensive data set of inorganic and organic N and P forms in the surface waters and groundwaters of this subtropical setting, acquired during the one-year sampling program, may be used in similar catchment hydrological studies where such detailed information is missing. The study also concludes that riparian buffer zones along the catchment drainage system attenuate the transport of nitrogen from agricultural sources in the surface water: concentrations of N decreased from upstream to downstream locations and were negligible at the outlet of the catchment.
Abstract:
This thesis reports on an investigation to develop an advanced and comprehensive milling process model of the raw sugar factory. Although the new model can be applied to both four-roller and six-roller milling units, it is primarily developed for the six-roller mills which are widely used in the Australian sugar industry. The approach taken was to gain an understanding of the previous milling process simulation model "MILSIM" developed at the University of Queensland nearly four decades ago. Although the MILSIM model was widely adopted in the Australian sugar industry for simulating the milling process, it did have some incorrect assumptions. The study aimed to eliminate all the incorrect assumptions of the previous model and develop an advanced model that represents the milling process correctly and tracks the flow of other cane components in the milling process which have not been considered in previous models. The development of the milling process model was carried out in three stages. Firstly, an enhanced milling unit extraction model (MILEX) was developed to assess the mill performance parameters and predict the extraction performance of the milling process. New definitions for the milling performance parameters were developed and a complete milling train along with the juice screen was modelled. The MILEX model was validated with factory data and the variation in the mill performance parameters was observed and studied. Some case studies were undertaken to study the effect of fibre in juice streams, juice in cush return and imbibition% fibre on the extraction performance of the milling process. It was concluded from the study that the empirical relations developed for the mill performance parameters in the MILSIM model were not applicable to the new model; new empirical relations have to be developed before the model can be applied with confidence. Secondly, a soluble and insoluble solids model was developed using modelling theory and experimental data to track the flow of sucrose (pol), reducing sugars (glucose and fructose), soluble ash, true fibre and mud solids entering the milling train through the cane supply and their distribution in the juice and bagasse streams. The soluble impurities and mud solids in cane affect the performance of the milling train and the further processing of juice and bagasse. New mill performance parameters were developed in the model to track the flow of cane components. The developed model is the first of its kind and provides additional insight into the flow of soluble and insoluble cane components and the factors affecting their distribution in juice and bagasse. The model proved to be a good extension to the MILEX model for studying the overall performance of the milling train. Thirdly, the developed models were incorporated into the proprietary software package "SysCAD" for advanced operational efficiency and for availability in the 'whole of factory' model. The MILEX model was developed in SysCAD software to represent a single milling unit. Eventually, the entire milling train and the juice screen were developed in SysCAD using a series of different controllers and features of the software. The models developed in SysCAD can be run from a macro-enabled Excel file and reports can be generated in Excel sheets. The flexibility of the software, its ease of use and other advantages are described in detail in the relevant chapter. The MILEX model is developed in static mode and dynamic mode; the application of the dynamic mode of the model is still in progress.
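At its core, tracking cane components through a milling unit is a set of component mass balances that split each incoming component between the expressed juice and the bagasse streams. The sketch below illustrates that idea only; the component names, split fractions and function names are illustrative assumptions, not the actual MILSIM/MILEX formulation.

def split_milling_unit(feed, juice_fraction):
    """Split each component of the feed (tonnes/h) between juice and bagasse
    according to an assumed extraction fraction per component."""
    juice = {c: m * juice_fraction.get(c, 0.0) for c, m in feed.items()}
    bagasse = {c: m - juice[c] for c, m in feed.items()}
    return juice, bagasse

# Illustrative feed composition and per-component extraction fractions (assumed values).
feed = {"sucrose": 40.0, "reducing_sugars": 2.0, "soluble_ash": 1.5, "fibre": 35.0, "water": 150.0}
extraction = {"sucrose": 0.70, "reducing_sugars": 0.70, "soluble_ash": 0.65, "fibre": 0.02, "water": 0.55}

juice, bagasse = split_milling_unit(feed, extraction)
print("juice:", juice)
print("bagasse:", bagasse)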
Abstract:
Operational modal analysis (OMA) is prevalent in modal identification of civil structures. It requires response measurements of the underlying structure under ambient loads. A valid OMA method requires that the excitation be white noise in time and space. Although there are numerous applications of OMA in the literature, few have investigated the statistical distribution of a measurement and the influence of such randomness on modal identification. This research employs a modified kurtosis index to evaluate the statistical distribution of raw measurement data. In addition, a windowing strategy employing this index has been proposed to select quality datasets. To demonstrate how the data selection strategy works, the ambient vibration measurements of a laboratory bridge model and a real cable-stayed bridge have been considered. The analysis incorporated frequency domain decomposition (FDD) as the target OMA approach for modal identification. The modal identification results using data segments with different randomness have been compared. The discrepancy in the FDD spectra of the results indicates that, in order to fulfil the assumption of an OMA method, special care should be taken when processing a long vibration measurement record. The proposed data selection strategy is easy to apply and is verified to be effective in modal analysis.
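The abstract does not define the modified kurtosis index or the windowing strategy, so the following is only an illustrative sketch of the general idea: screen windows of a long record by how close their (excess) kurtosis is to that of a Gaussian signal. The window length and threshold below are assumed parameters.

import numpy as np
from scipy.stats import kurtosis

def select_quality_windows(signal, window_len, max_abs_excess_kurtosis=0.5):
    """Split a long ambient-vibration record into windows and keep those whose
    excess kurtosis is close to 0 (i.e. closest to a Gaussian distribution)."""
    n_windows = len(signal) // window_len
    selected = []
    for i in range(n_windows):
        segment = signal[i * window_len:(i + 1) * window_len]
        k = kurtosis(segment, fisher=True)  # 0 for a Gaussian distribution
        if abs(k) <= max_abs_excess_kurtosis:
            selected.append((i, k, segment))
    return selected

# Synthetic example: mostly Gaussian ambient response with one spiky (non-Gaussian) stretch.
rng = np.random.default_rng(0)
record = rng.normal(size=60_000)
record[20_000:22_000] += rng.standard_t(df=2, size=2_000)  # contaminated segment
good = select_quality_windows(record, window_len=2_000)
print(f"kept {len(good)} of {len(record) // 2_000} windows")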
Abstract:
OBJECTIVE: The objective of this study was to describe the distribution of conjunctival ultraviolet autofluorescence (UVAF) in an adult population. METHODS: We conducted a cross-sectional, population-based study in the genetic isolate of Norfolk Island, South Pacific Ocean. In all, 641 people, aged 15 to 89 years, were recruited. UVAF and standard (control) photographs were taken of the nasal and temporal interpalpebral regions bilaterally. Differences between groups for non-normally distributed continuous variables were assessed using the Wilcoxon-Mann-Whitney rank-sum test. Trends across categories were assessed using Cuzick's non-parametric test for trend or Kendall's rank correlation τ. RESULTS: Conjunctival UVAF is a non-normally distributed trait with a positively skewed distribution. The median amount of conjunctival UVAF per person (sum of four measurements; right nasal/temporal and left nasal/temporal) was 28.2 mm² (interquartile range 14.5-48.2). There was an inverse, linear relationship between UVAF and advancing age (P<0.001). Males had a higher sum of UVAF compared with females (34.4 mm² vs 23.2 mm², P<0.0001). There were no statistically significant differences in the area of UVAF between right and left eyes or between nasal and temporal regions. CONCLUSION: We have provided the first quantifiable estimates of conjunctival UVAF in an adult population. Further data are required to provide information about the natural history of UVAF and to characterise other potential disease associations with UVAF. UVR-protective strategies should be emphasised at an early age to prevent the long-term adverse effects on health associated with excess UVR.
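For a skewed outcome such as UVAF area, the Wilcoxon-Mann-Whitney rank-sum comparison named in the Methods can be run with standard statistical libraries. The sketch below uses made-up UVAF values generated from a skewed distribution purely to illustrate that comparison; it is not the study's data or analysis code.

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical UVAF areas (mm^2) for two groups; values are illustrative only.
rng = np.random.default_rng(1)
males = rng.lognormal(mean=3.5, sigma=0.6, size=300)
females = rng.lognormal(mean=3.1, sigma=0.6, size=300)

stat, p = mannwhitneyu(males, females, alternative="two-sided")
print(f"median male UVAF = {np.median(males):.1f} mm^2, "
      f"median female UVAF = {np.median(females):.1f} mm^2, p = {p:.3g}")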
Abstract:
We present a method for optical encryption of information, based on the time-dependent dynamics of writing and erasure of refractive index changes in a bulk lithium niobate medium. Information is written into the photorefractive crystal with a spatially amplitude-modulated laser beam which, when overexposed, significantly degrades the stored data, making it unrecognizable. We show that the degradation can be reversed and that a one-to-one relationship exists between the degradation and recovery rates. It is shown that this simple relationship can be used to determine the erasure time required for decrypting the scrambled index patterns. In addition, this method could be used as a straightforward general technique for determining characteristic writing and erasure rates in photorefractive media.
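The abstract does not give the underlying dynamics, but photorefractive index changes are often approximated by exponential write and erase kinetics. The sketch below uses that common approximation only as an illustration; the rate constants, overexposure factor and function names are assumptions, not values from the paper.

import numpy as np

# Common exponential approximation for photorefractive index changes (illustrative only):
#   writing:  dn(t) = dn_sat * (1 - exp(-t / tau_w))
#   erasure:  dn(t) = dn_0   * exp(-t / tau_e)

def erasure_time_for_fraction(tau_e, remaining_fraction):
    """Time for the index modulation to decay to a given fraction of its starting value."""
    return -tau_e * np.log(remaining_fraction)

tau_write = 120.0   # s, assumed characteristic writing time
tau_erase = 300.0   # s, assumed characteristic erasure time
# Suppose overexposure left 2.5x the intended modulation; erase back down to the target level.
t_erase = erasure_time_for_fraction(tau_erase, remaining_fraction=1.0 / 2.5)
print(f"estimated erasure time: {t_erase:.0f} s")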
Abstract:
Background The expansion of cell colonies is driven by a delicate balance of several mechanisms including cell motility, cell-to-cell adhesion and cell proliferation. New approaches that can be used to independently identify and quantify the role of each mechanism will help us understand how each mechanism contributes to the expansion process. Standard mathematical modelling approaches to describe such cell colony expansion typically neglect cell-to-cell adhesion, despite the fact that cell-to-cell adhesion is thought to play an important role. Results We use a combined experimental and mathematical modelling approach to determine the cell diffusivity, D, cell-to-cell adhesion strength, q, and cell proliferation rate, λ, in an expanding colony of MM127 melanoma cells. Using a circular barrier assay, we extract several types of experimental data and use a mathematical model to independently estimate D, q and λ. In our first set of experiments, we suppress cell proliferation and analyse three different types of data to estimate D and q. We find that standard types of data, such as the area enclosed by the leading edge of the expanding colony and more detailed cell density profiles throughout the expanding colony, do not provide sufficient information to uniquely identify D and q. We find that additional data relating to the degree of cell-to-cell clustering is required to provide independent estimates of q, and in turn D. In our second set of experiments, where proliferation is not suppressed, we use data describing temporal changes in cell density to determine the cell proliferation rate. In summary, we find that our experiments are best described using the ranges D = 161–243 µm² hour⁻¹, q = 0.3–0.5 (low to moderate strength) and λ = 0.0305–0.0398 hour⁻¹, and with these parameters we can accurately predict the temporal variations in the spatial extent and cell density profile throughout the expanding melanoma cell colony. Conclusions Our systematic approach to identifying the cell diffusivity, cell-to-cell adhesion strength and cell proliferation rate highlights the importance of integrating multiple types of data to accurately quantify the factors influencing the spatial expansion of melanoma cell colonies.
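As a simple illustration of how a proliferation rate can be extracted from temporal cell density data (the kind of data described in the second set of experiments), the sketch below fits a logistic growth model; the density values, carrying capacity and function names are invented for the example and are not the study's actual data or fitting procedure.

import numpy as np
from scipy.optimize import curve_fit

def logistic_density(t, lam, c0, K):
    """Logistic growth of cell density: dc/dt = lam * c * (1 - c/K)."""
    return K * c0 * np.exp(lam * t) / (K + c0 * (np.exp(lam * t) - 1.0))

# Hypothetical cell densities (cells/um^2) at several observation times (hours).
t_obs = np.array([0.0, 12.0, 24.0, 36.0, 48.0])
c_obs = np.array([1.0e-3, 1.5e-3, 2.2e-3, 3.1e-3, 4.3e-3])

(lam_fit, c0_fit, K_fit), _ = curve_fit(logistic_density, t_obs, c_obs,
                                        p0=[0.03, 1.0e-3, 1.0e-2])
print(f"estimated proliferation rate lam = {lam_fit:.4f} per hour")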
Abstract:
Spatial organisation of proteins according to their function plays an important role in the specificity of their molecular interactions. Emerging proteomics methods seek to assign proteins to sub-cellular locations by partial separation of organelles and computational analysis of protein abundance distributions among partially separated fractions. Such methods permit simultaneous analysis of unpurified organelles and promise proteome-wide localisation in scenarios wherein perturbation may prompt dynamic re-distribution. Resolving organelles that display similar behaviour during a protocol designed to provide partial enrichment represents a possible shortcoming. We employ the Localisation of Organelle Proteins by Isotope Tagging (LOPIT) organelle proteomics platform to demonstrate that combining information from distinct separations of the same material can improve organelle resolution and assignment of proteins to sub-cellular locations. Two previously published experiments, whose distinct gradients are alone unable to fully resolve six known protein-organelle groupings, are subjected to a rigorous analysis to assess protein-organelle association via a contemporary pattern recognition algorithm. Upon straightforward combination of single-gradient data, we observe significant improvement in protein-organelle association via both a non-linear support vector machine algorithm and partial least-squares discriminant analysis. The outcome yields suggestions for further improvements to present organelle proteomics platforms, and a robust analytical methodology via which to associate proteins with sub-cellular organelles.
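The combination step described above amounts to concatenating each protein's fractionation profiles from two gradients before classification. The sketch below illustrates that idea with a support vector machine; the profile data, organelle labels and the use of scikit-learn are assumptions for illustration, not the study's actual data or pipeline.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_proteins, n_fractions = 200, 8

# Hypothetical relative-abundance profiles from two separate density gradients.
gradient_a = rng.random((n_proteins, n_fractions))
gradient_b = rng.random((n_proteins, n_fractions))
organelle_labels = rng.integers(0, 6, size=n_proteins)  # six known protein-organelle groupings

# Combine the two single-gradient experiments by concatenating profiles per protein.
combined = np.hstack([gradient_a, gradient_b])

clf = SVC(kernel="rbf", gamma="scale")
for name, X in [("gradient A only", gradient_a), ("combined", combined)]:
    score = cross_val_score(clf, X, organelle_labels, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {score:.2f}")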
Abstract:
BACKGROUND: An examination of melanoma incidence according to anatomical region may be one method of monitoring the impact of public health initiatives. OBJECTIVES: To examine melanoma incidence trends by body site, sex and age at diagnosis, or body site and morphology, in a population at high risk. MATERIALS AND METHODS: Population-based data on invasive melanoma cases (n = 51,473) diagnosed between 1982 and 2008 were extracted from the Queensland Cancer Registry. Age-standardized incidence rates were calculated using the direct method (2000 world standard population) and joinpoint regression models were used to fit trend lines. RESULTS: Significantly decreasing trends for melanomas on the trunk and upper limbs/shoulders were observed during recent years for both sexes under the age of 40 years and among males aged 40-59 years. However, in the 60 and over age group, the incidence of melanoma is continuing to increase at all sites (apart from the trunk) for males, and on the scalp/neck and upper limbs/shoulders for females. Rates of nodular melanoma are currently decreasing on the trunk and lower limbs. In contrast, superficial spreading melanoma is significantly increasing on the scalp/neck and lower limbs, along with substantial increases in lentigo maligna melanoma since the late 1990s at all sites apart from the lower limbs. CONCLUSIONS: In this large study we have observed significant decreases in rates of invasive melanoma in the younger age groups on less frequently exposed body sites. These results may provide some indirect evidence of the impact of long-running primary prevention campaigns.
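The direct method of age standardization named in the Methods weights each age-specific rate by a standard population's age distribution. The sketch below shows only that weighting step, with invented case counts, a truncated set of age bands and assumed standard-population weights (not the actual 2000 World Standard Population values).

def age_standardized_rate(cases, person_years, std_weights):
    """Directly age-standardized rate per 100,000: sum of age-specific rates
    weighted by the standard population's age distribution."""
    assert abs(sum(std_weights) - 1.0) < 1e-9
    return sum(100_000 * c / py * w
               for c, py, w in zip(cases, person_years, std_weights))

# Illustrative (invented) counts for three broad age bands: <40, 40-59, 60+.
cases = [120, 340, 560]
person_years = [2_500_000, 1_200_000, 800_000]
std_weights = [0.55, 0.27, 0.18]  # assumed weights, not the actual 2000 world standard population

print(f"ASR = {age_standardized_rate(cases, person_years, std_weights):.1f} per 100,000")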
Abstract:
This study considered the problem of predicting survival based on three alternative models: a single Weibull, a mixture of Weibulls and a cure model. Instead of the common procedure of choosing a single “best” model, where “best” is defined in terms of goodness of fit to the data, a Bayesian model averaging (BMA) approach was adopted to account for model uncertainty. This was illustrated using a case study in which the aim was the description of lymphoma cancer survival with covariates given by phenotypes and gene expression. The results of this study indicate that if the sample size is sufficiently large, one of the three models emerges as having the highest probability given the data, as indicated by the goodness-of-fit measure, the Bayesian information criterion (BIC). However, when the sample size was reduced, no single model was revealed as “best”, suggesting that a BMA approach would be appropriate. Although a BMA approach can compromise on goodness of fit to the data (when compared to the true model), it can provide robust predictions and facilitate more detailed investigation of the relationships between gene expression and patient survival.
Keywords: Bayesian modelling; Bayesian model averaging; Cure model; Markov chain Monte Carlo; Mixture model; Survival analysis; Weibull distribution
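In BIC-based Bayesian model averaging, each model's posterior probability is commonly approximated from its BIC and predictions are then averaged with those weights. The sketch below shows only that weighting step, with invented BIC values for the three candidate models; it is not the study's actual analysis.

import numpy as np

def bma_weights_from_bic(bics):
    """Approximate posterior model probabilities from BIC values:
    w_k proportional to exp(-0.5 * (BIC_k - min BIC))."""
    bics = np.asarray(bics, dtype=float)
    rel = np.exp(-0.5 * (bics - bics.min()))
    return rel / rel.sum()

# Invented BIC values for the single Weibull, Weibull mixture and cure models.
bics = {"single Weibull": 412.3, "Weibull mixture": 405.8, "cure model": 407.1}
weights = bma_weights_from_bic(list(bics.values()))
for (name, _), w in zip(bics.items(), weights):
    print(f"{name}: posterior weight ~ {w:.2f}")
# A model-averaged survival prediction is then sum_k weight_k * S_k(t) over the fitted models.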
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information; be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information which gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are pre-formed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, as with American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis.
A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement’s presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid in future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
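The content-analysis and user-profiling approach outlined in the first paper assigns scores to aspects of each incoming tweet (topic relevance, urgency, source credibility, amplification) and keeps only the highest-scoring items for responders. The sketch below is an illustration of that kind of scoring filter only; the keyword lists, account names, weights and thresholds are invented for the example.

from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    author: str
    retweets: int

# Invented keyword lists and authoritative accounts, for illustration only.
TOPIC_KEYWORDS = {"flood", "evacuate", "road closed", "shelter"}
URGENCY_KEYWORDS = {"now", "urgent", "immediately", "trapped"}
AUTHORITATIVE_USERS = {"example_police", "example_weather"}

def score_tweet(tweet: Tweet) -> float:
    """Simple additive score combining topic relevance, urgency and source credibility."""
    text = tweet.text.lower()
    score = 0.0
    score += 2.0 * sum(kw in text for kw in TOPIC_KEYWORDS)      # relationship to the topic
    score += 1.5 * sum(kw in text for kw in URGENCY_KEYWORDS)    # urgency of the information
    if tweet.author.lower() in AUTHORITATIVE_USERS:              # known authoritative source
        score += 3.0
    score += min(tweet.retweets / 100.0, 2.0)                    # amplification, capped
    return score

incoming = [
    Tweet("Road closed at the bridge, evacuate now", "example_police", 250),
    Tweet("Nice weather today", "someone_else", 1),
]
for t in sorted(incoming, key=score_tweet, reverse=True):
    print(f"{score_tweet(t):5.1f}  @{t.author}: {t.text}")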
Abstract:
Introduction: The built environment is increasingly recognised as being associated with health outcomes. Relationships between the built environment and health differ among age groups, especially between children and adults, but also between younger, mid-age and older adults. Yet few studies address differences across life stage groups within a single population study. Moreover, existing research mostly focuses on physical activity behaviours, with few studying objective clinical and mental health outcomes. The Life Course Built Environment and Health (LCBEH) project explores the impact of the built environment on self-reported and objectively measured health outcomes in a random sample of people across the life course. Methods and analysis: This cross-sectional data linkage study involves 15 954 children (0–15 years), young adults (16–24 years), adults (25–64 years) and older adults (65+ years) from the Perth metropolitan region who completed the Health and Wellbeing Surveillance System survey administered by the Department of Health of Western Australia from 2003 to 2009. Survey data were linked to Western Australia's (WA) Hospital Morbidity Database System (hospital admission) and Mental Health Information System (mental health system outpatient) data. Participants’ residential address was geocoded and features of their ‘neighbourhood’ were measured using Geographic Information Systems software. Associations between the built environment and self-reported and clinical health outcomes will be explored across varying geographic scales and life stages. Ethics and dissemination: The University of Western Australia's Human Research Ethics Committee and the Department of Health of Western Australia approved the study protocol (#2010/1). Findings will be published in peer-reviewed journals and presented at local, national and international conferences, thus contributing to the evidence base informing the design of healthy neighbourhoods for all residents.
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to study. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as explicit data at a fine level of detail are used, and computer-intensive, as many interactions between agents, which can learn and have a goal, are required. With the growing availability of data and the increase in computer power, these concerns are, however, fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers’ behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from the usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Using such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities’ physical characteristics, and b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same; this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on which simulation is to be run. For example, data can be used to describe the environment to which the agents respond – e.g. weather for solar panels – or to describe the assets and their relation to one another – e.g. the network assets.
Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains, such as transport, which is part of future work with the addition of electric vehicles.
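The separation of assets (physical characteristics) from agents (behaviour) described above can be illustrated with a small sketch. The class names, attributes and battery example below are illustrative assumptions, not MODAM's actual API (which is built on OSGi/Eclipse plugins).

from dataclasses import dataclass
from typing import Protocol

@dataclass
class BatteryAsset:
    """Physical characteristics only: the same asset can serve different purposes."""
    capacity_kwh: float
    depth_of_discharge: float
    lifetime_cycles: int

class BatteryAgent(Protocol):
    """Behaviour is kept separate from the asset and depends on the owner's goal."""
    def dispatch(self, asset: BatteryAsset, hour: int, load_kw: float) -> float: ...

class PeakLoppingAgent:
    def dispatch(self, asset, hour, load_kw):
        # Discharge only during evening peak hours, limited by usable capacity (units simplified).
        usable = asset.capacity_kwh * asset.depth_of_discharge
        return min(load_kw, usable) if 17 <= hour <= 21 else 0.0

class SelfConsumptionAgent:
    def dispatch(self, asset, hour, load_kw):
        # Different behaviour, identical asset: follow the household load whenever possible.
        usable = asset.capacity_kwh * asset.depth_of_discharge
        return min(load_kw, usable)

asset = BatteryAsset(capacity_kwh=10.0, depth_of_discharge=0.8, lifetime_cycles=5000)
for agent in (PeakLoppingAgent(), SelfConsumptionAgent()):
    print(type(agent).__name__, [round(agent.dispatch(asset, h, 3.0), 1) for h in (12, 19)])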
Abstract:
Global awareness of cleaner and renewable energy is transforming the electricity sector at many levels. New technologies are being increasingly integrated into the electricity grid at high, medium and low voltage levels, new taxes on carbon emissions are being introduced, and individuals can now produce electricity, mainly through rooftop photovoltaic (PV) systems. While leading to improvements, these changes also introduce challenges, and a question that often arises is ‘how can we manage this constantly evolving grid?’ The Queensland Government and Ergon Energy, one of the two Queensland distribution companies, have partnered with some Australian and German universities on a project to answer this question in a holistic manner. The project investigates the impact that the integration of renewables and other new technologies has on the physical structure of the grid, and how this evolving system can be managed in a sustainable and economical manner. To aid understanding of what the future might bring, a software platform has been developed that integrates two modelling techniques: agent-based modelling (ABM), to capture the characteristics of the different system units accurately and dynamically, and particle swarm optimization (PSO), to find the most economical mix of network extension and integration of distributed generation over long periods of time. Using data from Ergon Energy, two types of networks have been modelled: three-phase and Single Wire Earth Return (SWER); three-phase networks are usually used in dense networks such as urban areas, while SWER networks are widely used in rural Queensland. Simulations can be performed on these networks to identify the required upgrades, following a three-step process: a) what is already in place and how it performs under current and future loads, b) what can be done to manage it and plan the future grid, and c) how these upgrades/new installations will perform over time. The number of small-scale distributed generators, e.g. PV and battery systems, is now sufficient (and expected to increase) to impact the operation of the grid, which in turn needs to be considered by the distribution network manager when planning upgrades and/or installations to stay within regulatory limits. Different scenarios can be simulated, with different levels of distributed generation, in place as well as expected, so that a large number of options can be assessed (Step a). Once the location, sizing and timing of asset upgrades and/or installations are found using optimisation techniques (Step b), it is possible to assess the adequacy of their daily performance using agent-based modelling (Step c). One distinguishing feature of this software is that it is possible to analyse a whole area at once, while still having a tailored solution for each of the sub-areas. To illustrate this, using the impact that battery and PV installations can have on the two types of networks mentioned above, three design conditions (amongst others) can be identified:
· Urban conditions
o Feeders that have a low take-up of solar generators may benefit from adding solar panels.
o Feeders that need voltage support at specific times may be assisted by installing batteries.
· Rural conditions (SWER network)
o Feeders that need voltage support as well as peak lopping may benefit from both battery and solar panel installations.
This small example demonstrates that no single solution can be applied across all three areas, and there is a need to be selective in which one is applied to each branch of the network.
This is currently the function of the engineer who can define various scenarios against a configuration, test them and iterate towards an appropriate solution. Future work will focus on increasing the level of automation in identifying areas where particular solutions are applicable.
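The platform couples the agent-based network model with particle swarm optimization to search for an economical mix of upgrades. The sketch below shows only the bare PSO loop; the cost function, decision variables, bounds and swarm parameters are invented for the example, whereas in the real platform the objective would be evaluated by the network simulation.

import numpy as np

def upgrade_cost(x):
    """Placeholder objective: total cost of a candidate upgrade plan.
    In the real platform this would come from the network simulation."""
    return np.sum((x - np.array([2.0, 5.0, 1.0])) ** 2) + 10.0

rng = np.random.default_rng(3)
n_particles, n_dims, n_iters = 20, 3, 100
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients (assumed)

pos = rng.uniform(0.0, 10.0, (n_particles, n_dims))   # e.g. battery kWh, PV kW, line upgrade km
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([upgrade_cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)
    vals = np.array([upgrade_cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best plan:", np.round(gbest, 2), "cost:", round(upgrade_cost(gbest), 2))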