Abstract:
Background: While weight gain following breast cancer is considered common, results supporting these findings are dated. This work describes changes in body weight following breast cancer over 72 months, compares weight with normative data and explores whether weight changes over time are associated with personal, diagnostic, treatment or behavioral characteristics. Methods: A population-based sample of 287 Australian women diagnosed with early-stage invasive breast cancer was assessed prospectively at six, 12, 18 and 72 months post-surgery. Weight was clinically measured and linear mixed models were used to explore associations between weight and participant characteristics (collected via self-administered questionnaire). Those with BMI changes of one or more units were considered to have experienced clinically significant changes in weight. Results: More than half (57%) of participants were overweight or obese at 6 months post-surgery, and by 72 months post-surgery 68% of women were overweight or obese. Among those who gained more weight than age-matched norms, clinically significant weight gain between 6 and 18 months and between 6 and 72 months post-surgery was observed in 24% and 39% of participants, respectively (median [range] weight gain: 3.9kg [2.0-11.3kg] and 5.2kg [0.6-28.7kg], respectively). Clinically significant weight losses were observed in up to 24% of the sample (median [range] weight loss between 6 and 72 months post-surgery: -6.4kg [-1.9 to -24.6kg]). More extensive lymph node removal, being treated on the non-dominant side, receiving radiation therapy and lower physical activity levels at 6 months were associated with higher body weights post-breast cancer (group differences >3kg; all p<0.05). Conclusions: While average weight gain among breast cancer survivors in the long term is small, subgroups of women experience greater gains that are linked with adverse health outcomes and exceed those experienced by age-matched counterparts.
Weight change post-breast cancer is a contemporary public health issue and the integration of healthy weight education and support into standard breast cancer care has potential to significantly improve the length and quality of cancer survivorship.
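The study's clinical-significance criterion (a BMI shift of one unit or more) is straightforward to state in code. This is a minimal sketch; the example weights and the 1.65 m height are purely illustrative, not values from the study:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def clinically_significant_change(weight_before_kg: float,
                                  weight_after_kg: float,
                                  height_m: float,
                                  threshold: float = 1.0) -> bool:
    """Flag a weight change as clinically significant when BMI shifts
    by one unit or more, the criterion used in the study."""
    change = bmi(weight_after_kg, height_m) - bmi(weight_before_kg, height_m)
    return abs(change) >= threshold
```

For a woman of the assumed 1.65 m height, the median observed gain of 3.9 kg moves BMI by roughly 1.4 units and would be flagged, whereas a 1 kg change would not.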
Abstract:
Iterative computational models have been used to investigate the regulation of bone fracture healing by local mechanical conditions. Although their predictions replicate some mechanical responses and histological features, they do not typically reproduce the predominantly radial hard callus growth pattern observed in larger mammals. We hypothesised that this discrepancy results from an artefact of the models’ initial geometry. Using axisymmetric finite element models, we demonstrated that pre-defining a field of soft tissue in which callus may develop introduces high deviatoric strains in the periosteal region adjacent to the fracture. These bone-inhibiting strains are not present when the initial soft tissue is confined to a thin periosteal layer. As observed in previous healing models, tissue differentiation algorithms regulated by deviatoric strain predicted hard callus forming remotely and growing towards the fracture. While dilatational strain regulation allowed early bone formation closer to the fracture, hard callus still formed initially over a broad area, rather than expanding over time. Modelling callus growth from a thin periosteal layer successfully predicted the initiation of hard callus growth close to the fracture site. However, these models were still susceptible to elevated deviatoric strains in the soft tissues at the edge of the hard callus. Our study highlights the importance of the initial soft tissue geometry used for finite element models of fracture healing. If this cannot be defined accurately, alternative mechanisms for the prediction of early callus development should be investigated.
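A deviatoric-strain-regulated differentiation rule of the kind referred to above classifies local tissue by strain level: low strains permit bone formation, while elevated strains inhibit it. The sketch below illustrates the general shape of such a rule; the threshold values are illustrative placeholders, not those used in the paper's models:

```python
def differentiate(deviatoric_strain: float,
                  bone_threshold: float = 0.05,
                  cartilage_threshold: float = 0.15) -> str:
    """Toy strain-regulated tissue differentiation rule: low deviatoric
    strain permits bone, moderate strain cartilage, and high strain only
    fibrous tissue. Thresholds are hypothetical, for illustration only."""
    if deviatoric_strain < bone_threshold:
        return "bone"
    if deviatoric_strain < cartilage_threshold:
        return "cartilage"
    return "fibrous"
```

Under such a rule, the high deviatoric strains that a pre-defined soft-tissue field introduces near the fracture would suppress bone there, which is why the models predict hard callus forming remotely first.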
Abstract:
Introduction Hydrogels prepared from star-shaped poly(ethylene glycol) (PEG) and maleimide-functionalized heparin provide a potential matrix for use in developing three-dimensional (3D) models. We have previously demonstrated that these hydrogels support the cultivation of human umbilical vein endothelial cells (HUVECs). We extend this body of work to study the ability to create an extracellular matrix (ECM)-like model of breast and prostate cancer cell growth in 3D. We also investigate the ability to produce a tri-culture mimicking tumour angiogenesis with cancer spheroids, HUVECs and mesenchymal stem cells (MSCs). Materials and Methods The breast cancer cell lines, MCF-7 and MDA-MB-231, and prostate cancer cell lines, LNCaP and PC3, were seeded into starPEG-heparin hydrogels and grown for 14 days to analyze the effects of varying hydrogel stiffness on spheroid development. Resulting hydrogel constructs were analyzed via proliferation assays, light microscopy, and immunostaining. Cancer cell lines were then seeded into starPEG-heparin hydrogels functionalized with growth factors as spheroids with HUVECs and MSCs and grown as a tri-culture. Cultures were analyzed via immunostaining and observed using confocal microscopy. Results Cultures prepared in MMP-cleavable starPEG-heparin hydrogels display spheroid formation in contrast to adherent growth on tissue culture plastic. Small differences were visualized in cancer spheroid growth between different gel stiffnesses across the range of cell lines. Cancer cell lines were able to be co-cultivated with HUVECs and MSCs. Interaction between tumours and HUVECs was visualized via confocal microscopy. Further studies aim to optimize the model further to mimic the ECM environment of in-situ tumour angiogenesis. Discussion Our results confirm the suitability of hydrogels constructed from starPEG-heparin for HUVEC and MSC co-cultivation with cancer cell lines to study cell-cell and cell-matrix interactions in a 3D environment.
This represents a step forward in the development of 3D culture models to study the pathomechanisms of breast and prostate cancer.
Abstract:
This project investigated the calcium distributions of the skin, and the growth patterns of skin substitutes grown in the laboratory, using mathematical models. The research found that the calcium distribution in the upper layer of the skin is controlled by three different mechanisms, not one as previously thought. The research also suggests that tight junctions, which are adhesions between neighbouring skin cells, cannot be solely responsible for the differences in the growth patterns of skin substitutes and normal skin.
Abstract:
Background Food neophobia, the rejection of unknown or novel foods, may result in poor dietary patterns. This study investigates the cross-sectional relationship between neophobia in children aged 24 months and variety of fruit and vegetable consumption, intake of discretionary foods and weight. Methods Secondary analysis of data from 330 parents of children enrolled in the NOURISH RCT (control group only) and SAIDI studies was performed using data collected at child age 24 months. Neophobia was measured at 24 months using the Child Food Neophobia Scale (CFNS). The cross-sectional associations between total CFNS score and fruit and vegetable variety, discretionary food intake and BMI (Body Mass Index) Z-score were examined via multiple regression models, adjusting for significant covariates. Results At 24 months, more neophobic children were found to have a lower variety of fruits (β=-0.16, p=0.003) and vegetables (β=-0.29, p<0.001) but a greater proportion of daily energy from discretionary foods (β=0.11, p=0.04). There was no significant association between BMI Z-score and CFNS score. Conclusions Neophobia is associated with poorer dietary quality. Results highlight the need for interventions to (1) begin early to expose children to a wide variety of nutritious foods before neophobia peaks and (2) enable health professionals to educate parents on strategies to overcome neophobia.
Abstract:
1. Description of the Work The Fleet Store was devised as a creative output to establish an exhibition linked to a fashion business model where emerging designers were encouraged to research new and innovative strategies for creating design-driven and commercial collections for a public consumer. This was a project devised to break down the perception among emerging fashion designers that designing commercial collections linked to a sustainable business model is a boring and unnecessary process. The focus was to demystify the business of fashion and to link its importance to a design-driven and public outcome that is more familiar to fashion designers. The criterion for participation was that all designers had to be registered as a business with the Australian Taxation Office. Designers were chosen from the Creative Enterprise Australia Fashion Business Incubator, the QUT fashion graduate alumni and current QUT fashion design and double degree (fashion and business) students with existing businesses. The project evolved from a series of collaborative workshops where designers were introduced to new and innovative creative industries' business models and the processes, costings and timings involved to create a niche, sustainable business for a public exhibition of design-driven commercial collections. All designers initiated their own business infrastructure but were then introduced to the concept of collaboration for successful and profitable exhibition and business outcomes. Collaborative strategies such as crowd funding, crowd sourcing, peer-to-peer mentoring and manufacturing were all researched, and strategies for the establishment of the retail exhibition were all devised in a collaborative environment. All participants also took on roles outside their 'designer' background to create a retail exhibition that was creative but also had critical mass and aesthetic appeal for the consumer.
The Fleet Store 'popped up' for 2 weeks (10 days), in a heritage-listed building in an inner city location. Passers-by were important, but the main consumer was enlisted by the use of interest and investment from crowd sourcing, crowd funding, ethical marketing, corporate social responsibility projects and collaborative public relations and social media strategies. The research has furthered discussion on innovative strategies for emerging fashion designers to initiate and maintain sustainable businesses, and suggests that collaboration combined with a design-driven and business focus can create a sustainable and economically viable retail exhibition. 2. Research Statement Research Background The research field involved developing a new ethical, design-driven, collaborative and sustainable model for fashion design practice and management. The research asked whether a public, design-driven, collaborative retail exhibition could create a platform for promoting creative, innovative and sustainable business models for emerging fashion designers. The methodology was primarily practice-led, as all participants were designers in their own right and the project manager acted as a mentor and curator to guide the process and analyse the potential of the research question. The Fleet Store offers new knowledge in design practice and management, with the creation of a model where design outcomes and business models are inextricably linked to the success of the creative output. Key innovations include extending the commercialisation of emerging fashion businesses by creating a curated retail gallery for collaborative and sustainable strategies to support niche fashion designer labels. This has contributed to a broader conversation on how to nurture and sustain competitive Australian fashion designers/labels.
Research Contribution and Significance The Fleet Store has contributed to a growing body of research into innovative and sustainable business models for niche fashion and creative industries' practitioners. All participants have maintained their business infrastructure and many are currently growing their businesses, using the strategies tested for the Fleet Store. The exhibition space was visited by over 1,000 people and sales of $27,000 were made in 10 days of opening. (Follow-up sales of $3,000 have also been reported.) Three of the designers were 'discovered' from the exhibition and have received substantial orders from high-profile national buyers and retailers for next-season delivery. Several participants have since collaborated to create other pop-up retail environments and are now mentoring other emerging designers on the significance of a collaborative retail exhibition to consolidate niche business models for emerging fashion designers.
Abstract:
The ultimate goal of profiling is to identify the major behavioral and personality characteristics of an offender in order to narrow the suspect pool. Inferences about offender characteristics can be made deductively, based on the analysis of discrete offender behaviors established within a particular case. They can also be made inductively, involving prediction based on abstract offender averages from group data (these methods and the logic on which they are based are detailed extensively in Chapters 2 and 4). As discussed, these two approaches are by no means equal.
Abstract:
Local spatio-temporal features with a Bag-of-Visual-Words model form a popular approach to human action recognition. Bag-of-Features methods face several challenges, such as extracting appropriate appearance and motion features from videos, converting extracted features into a form suitable for classification, and designing a suitable classification framework. In this paper we address the problem of efficiently representing the extracted features for classification to improve the overall performance. We introduce two generative supervised topic models, maximum entropy discrimination LDA (MedLDA) and class-specific simplex LDA (css-LDA), to encode the raw features in a form suitable for discriminative SVM-based classification. Unsupervised LDA models disconnect topic discovery from the classification task and hence yield poor results compared to the baseline Bag-of-Words framework. Supervised LDA techniques, on the other hand, learn the topic structure by considering the class labels and improve the recognition accuracy significantly. MedLDA maximizes likelihood and within-class margins using max-margin techniques and yields a sparse, highly discriminative topic structure, while css-LDA learns separate class-specific topics instead of a common set of topics across the entire dataset. In our representation, topics are first learned and each video is then represented as a topic proportion vector, comparable to a histogram of topics. Finally, SVM classification is performed on the learned topic proportion vectors. We demonstrate the efficiency of these two representation techniques through experiments carried out on two popular datasets. Experimental results show significantly improved performance compared to the baseline Bag-of-Features framework, which uses k-means to construct a histogram of words from the feature vectors.
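The baseline Bag-of-Features pipeline against which the topic-model representations are compared (a k-means visual vocabulary followed by a histogram of visual words per video) can be sketched as below. The feature arrays here are hypothetical stand-ins for real spatio-temporal descriptors:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means, used to build a visual vocabulary from pooled
    local features across the training videos."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bow_histogram(features, centers):
    """Quantize one video's local features against the vocabulary and
    return its normalized histogram of visual words."""
    labels = np.argmin(((features[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

The topic-model representations discussed in the abstract replace this histogram with a learned topic proportion vector, which then feeds the same SVM classification stage.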
Abstract:
Railway capacity determination and expansion are very important topics. In prior research, the competition between different entities, such as train services and train types, on different network corridors has been ignored, poorly modelled, or else assumed to be static. In response, a comprehensive set of multi-objective models has been formulated in this article to perform a trade-off analysis. These models determine the total absolute capacity of railway networks as the most equitable solution according to a clearly defined set of competing objectives. The models also perform a sensitivity analysis of capacity with respect to those competing objectives. The models have been extensively tested on a case study and their significant worth is shown. The models were solved using a variety of techniques; an adaptive ε-constraint method proved the most effective. In order to identify only the best solution, a Simulated Annealing meta-heuristic was implemented and tested. However, a linearization technique based upon separable programming was also developed and shown to be superior in terms of solution quality, though far less so in terms of computational time.
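The ε-constraint idea mentioned above (optimise one objective while bounding the other, then sweep the bound to trace the trade-off curve) can be illustrated on a toy discrete problem. The candidate (capacity, delay) pairs below are hypothetical, far simpler than the article's optimisation formulations:

```python
# Hypothetical candidate solutions as (capacity, delay) pairs, where
# higher capacity tends to come at the price of higher delay.
candidates = [(10, 2), (12, 3), (14, 5), (15, 6), (16, 8), (18, 9)]

def epsilon_constraint(solutions, epsilons):
    """Maximise capacity subject to delay <= epsilon, sweeping epsilon
    to collect the set of Pareto-optimal trade-off solutions."""
    front = set()
    for eps in epsilons:
        feasible = [s for s in solutions if s[1] <= eps]
        if feasible:
            front.add(max(feasible, key=lambda s: s[0]))
    return sorted(front)

front = epsilon_constraint(candidates, epsilons=range(2, 10))
```

Each value of ε yields the best capacity achievable within that delay budget; sweeping ε recovers the whole trade-off frontier, which is the kind of analysis the article's models automate at scale.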
Abstract:
Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency. 
©2006 Society for Conservation Biology.
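The classical eigenvector-based sensitivity and elasticity calculations that this cost-aware method extends can be sketched for a toy two-stage matrix model. The vital rates below are illustrative, not Helmeted Honeyeater or koala parameters:

```python
import numpy as np

# Toy two-stage projection matrix: column = current stage, row = next stage.
A = np.array([[0.3, 1.2],   # juvenile retention, adult fecundity
              [0.4, 0.8]])  # maturation,         adult survival

eigvals, W = np.linalg.eig(A)
i = np.argmax(eigvals.real)
lam = eigvals[i].real                 # dominant eigenvalue: growth rate
w = W[:, i].real                      # right eigenvector: stable stage structure

eigvalsT, V = np.linalg.eig(A.T)
v = V[:, np.argmax(eigvalsT.real)].real  # left eigenvector: reproductive values

S = np.outer(v, w) / (v @ w)          # sensitivities d(lambda)/d(a_ij)
E = (A / lam) * S                     # elasticities: proportional sensitivities
```

The elasticities sum to one, so they can be read as the relative contribution of each vital rate to growth; the article's point is that ranking rates by E alone, without the cost of changing each rate, can misdirect management spending.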
Abstract:
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.
Abstract:
This thesis focused upon the development of improved capacity analysis and capacity planning techniques for railways. A number of innovations were made and tested on a case study of a real national railway. These techniques can reduce the time required to perform the decision-making activities of planners and managers. As all railways need to be expanded to meet increasing demands, the presumption that analytical capacity models can be used to identify how best to improve an existing network at least cost was fully investigated. Track duplication was the mechanism used to expand a network's capacity, and two variant capacity expansion models were formulated. Another outcome of this thesis is the development and validation of bi-objective models for capacity analysis. These models regulate the competition for track access and perform a trade-off analysis. An opportunity to develop more general multi-objective approaches was identified.
Abstract:
Wound healing and tumour growth involve collective cell spreading, which is driven by individual motility and proliferation events within a population of cells. Mathematical models are often used to interpret experimental data and to estimate the parameters so that predictions can be made. Existing methods for parameter estimation typically assume that these parameters are constants and often ignore any uncertainty in the estimated values. We use approximate Bayesian computation (ABC) to estimate the cell diffusivity, D, and the cell proliferation rate, λ, from a discrete model of collective cell spreading, and we quantify the uncertainty associated with these estimates using Bayesian inference. We use a detailed experimental data set describing the collective cell spreading of 3T3 fibroblast cells. The ABC analysis is conducted for different combinations of initial cell densities and experimental times in two separate scenarios: (i) where collective cell spreading is driven by cell motility alone, and (ii) where collective cell spreading is driven by combined cell motility and cell proliferation. We find that D can be estimated precisely, with a small coefficient of variation (CV) of 2–6%. Our results indicate that D appears to depend on the experimental time, which is a feature that has been previously overlooked. Assuming that the values of D are the same in both experimental scenarios, we use the information about D from the first experimental scenario to obtain reasonably precise estimates of λ, with a CV between 4 and 12%. Our estimates of D and λ are consistent with previously reported values; however, our method is based on a straightforward measurement of the position of the leading edge whereas previous approaches have involved expensive cell counting techniques. Additional insights gained using a fully Bayesian approach justify the computational cost, especially since it allows us to accommodate information from different experiments in a principled way.
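The ABC rejection scheme can be sketched on a toy version of the problem. The √t forward model, observation times and diffusivity below are illustrative stand-ins for the discrete spreading model and 3T3 data used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def leading_edge(D, times):
    """Toy forward model: leading-edge displacement grows like sqrt(4*D*t),
    a diffusion-driven stand-in for the discrete spreading model."""
    return np.sqrt(4.0 * D * times)

times = np.array([12.0, 24.0, 36.0, 48.0])  # hypothetical observation times (h)
D_true = 1000.0                             # hypothetical diffusivity (um^2/h)
observed = leading_edge(D_true, times) + rng.normal(0.0, 5.0, times.size)

# ABC rejection: draw D from a uniform prior, simulate the leading-edge
# summary statistic, and keep the draws that lie closest to the data.
prior = rng.uniform(100.0, 3000.0, 200_000)
dist = np.abs(leading_edge(prior[:, None], times) - observed).mean(axis=1)
posterior = prior[dist < np.quantile(dist, 0.001)]
```

The accepted draws approximate the posterior for D; their spread gives the uncertainty quantification (e.g. a coefficient of variation) that point estimation alone would miss.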
Abstract:
PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). This approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought so that simulations can be set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists of having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller to create ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details about its implementation within MODAM (MODular Agent-based Model), a software framework which is applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are given for that domain throughout the paper. It is however expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users and developers as well as for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix and match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program.
Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation; it also eases the verification and validation of models by allowing alternative simulations to be set up quickly.
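The composition idea can be illustrated with a minimal sketch: an agent is assembled at runtime from atomic components rather than defined as a monolithic class. The component names and the electricity-flavoured example are hypothetical, not MODAM's actual API:

```python
class Component:
    """Atomic unit of behaviour; concrete components override step()."""
    def step(self, agent):
        pass

class Consumption(Component):
    def __init__(self, kw):
        self.kw = kw
    def step(self, agent):
        agent.load += self.kw   # household demand adds to net load

class SolarPanel(Component):
    def __init__(self, kw):
        self.kw = kw
    def step(self, agent):
        agent.load -= self.kw   # local generation offsets net load

class Agent:
    """Agent assembled from components at runtime; extending the model
    means adding components, not modifying existing code."""
    def __init__(self, *components):
        self.components = list(components)
        self.load = 0.0
    def step(self):
        for c in self.components:
            c.step(self)

house = Agent(Consumption(5.0), SolarPanel(2.0))
house.step()  # net load after one step: 3.0 kW
```

Swapping or adding components (a battery, an electric vehicle) changes the agent's behaviour without touching the Agent class, which is the extensibility property the paper emphasises.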