609 results for term-structure


Abstract:

The research objective of this thesis was to contribute to Bayesian statistical methodology, specifically to risk assessment methodology and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and to use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with various constants, thereby allowing the calculation of more realistic credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, two submitted, and the final paper still in draft.

The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian network, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for incorporating experimental error into risk assessments.

In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave the model flexibility, allowing both the spatially structured and unstructured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so would allow little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000].

The work in the fifth paper deals with the fact that for large datasets the use of WinBUGS becomes problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that soil moisture under each treatment reaches a treatment-specific level at a depth of 200 cm and thereafter stays constant, albeit with variance increasing with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility, but it does not in itself give insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
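To make the CAR layered idea concrete, here is a minimal sketch (in Python with NumPy; illustrative only, not the authors' WinBUGS or pyMCMC code) of a proper CAR prior whose neighbourhood structure links sites only within the same depth layer, so each layer carries its own spatial precision. The grid size, tau values and rho are assumptions for illustration.

```python
import numpy as np

def grid_adjacency(nrow, ncol):
    """0/1 adjacency for an nrow x ncol lattice (rook neighbours)."""
    n = nrow * ncol
    W = np.zeros((n, n))
    for r in range(nrow):
        for c in range(ncol):
            i = r * ncol + c
            if c + 1 < ncol:              # east neighbour
                W[i, i + 1] = W[i + 1, i] = 1
            if r + 1 < nrow:              # south neighbour
                W[i, i + ncol] = W[i + ncol, i] = 1
    return W

def layered_car_draws(nrow, ncol, taus, rho=0.9, seed=0):
    """One draw per depth layer from a proper CAR prior; layers are
    independent and each has its own precision tau, so the spatially
    structured variance can differ by depth -- the CAR layered idea."""
    rng = np.random.default_rng(seed)
    W = grid_adjacency(nrow, ncol)
    D = np.diag(W.sum(axis=1))
    draws = []
    for tau in taus:
        Q = tau * (D - rho * W)           # layer-specific CAR precision
        cov = np.linalg.inv(Q)
        cov = (cov + cov.T) / 2           # symmetrise against fp error
        draws.append(rng.multivariate_normal(np.zeros(len(Q)), cov))
    return np.vstack(draws)               # shape: (n_layers, nrow * ncol)

field = layered_car_draws(nrow=5, ncol=5, taus=[4.0, 2.0, 1.0, 0.5])
print(field.shape)  # (4, 25): one spatial surface per depth layer
```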

Abstract:

Previous studies have shown that exercise (Ex) interventions create a stronger coupling between energy intake (EI) and energy expenditure (EE), leading to greater homeostasis of the energy-balance (EB) regulatory system than a diet intervention, where an uncoupling between EI and EE occurs. The benefits of weight loss from Ex and diet interventions greatly depend on compensatory responses. The present study investigated an 8-week medium-term Ex and diet intervention program (the Ex intervention comprised 500 kcal of EE five days per week over four weeks at 65-75% of maximal heart rate, whereas the diet intervention comprised a 500 kcal decrease in EI five days per week over four weeks) and its effects on compensatory responses and appetite regulation among healthy individuals, using a between- and within-subjects design. We tested the effects of an acute dietary manipulation on appetite and compensatory behaviours, and whether a diet and/or Ex intervention predisposes individuals to disturbances in EB homeostasis. Energy intake at an ad libitum lunch test meal after a high-energy (HE) or low-energy (LE) breakfast pre-load (the HE pre-load contained 556 kcal and the LE pre-load 239 kcal) was measured at the Baseline (Weeks -4 to 0) and Intervention (Weeks 0 to 4) phases in 13 healthy volunteers (three males and ten females; mean age 35 years [SD ± 9] and mean BMI 25 kg/m² [SD ± 3.8]); the groups comprised Ex = 7 and diet = 5 (one female in the diet group dropped out midway), and thus 12 participants completed the study. At Weeks -4, 0 and 4, visual analogue scales (VAS) were used to assess hunger and satiety, and liking and wanting (L&W) for nutrient and taste preferences were assessed using a computer-based system (E-Prime v1.1.4). Ad libitum test meal EI was consistently lower after the HE pre-load than after the LE pre-load; however, this pattern was not consistent during the diet intervention. A pre-load x group interaction on ad libitum test meal EI revealed that during the intervention phase the Ex group showed improved sensitivity in detecting the energy content of the two pre-loads and improved compensation at the ad libitum test meal, whereas the diet group's ability to differentiate between the two pre-loads decreased and its compensation became poorer (F[1,10] = 2.88, p-value not significant). This study supports previous findings on the effects that Ex and diet interventions have on appetite and compensatory responses: Ex increases, and diet decreases, energy-balance sensitivity.
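The compensation referred to here is conventionally summarised with a percent-compensation index; a minimal sketch follows (this is the standard appetite-research formula, not code from the study itself, and the example intakes are invented).

```python
def percent_compensation(ei_after_low, ei_after_high,
                         preload_high=556, preload_low=239):
    """100% means the test-meal intake fully offsets the pre-load
    energy difference; pre-load values are those stated in the abstract."""
    return 100 * (ei_after_low - ei_after_high) / (preload_high - preload_low)

# e.g. eating 250 kcal more at lunch after the 239 kcal pre-load than after
# the 556 kcal pre-load gives ~79% compensation:
print(round(percent_compensation(ei_after_low=900, ei_after_high=650)))  # 79
```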

Abstract:

This paper reports a 2-year longitudinal study on the effectiveness of the Pattern and Structure Mathematical Awareness Program (PASMAP) on students' mathematical development. The study involved 316 Kindergarten students in 17 classes from four schools in Sydney and Brisbane. The development of the PASA assessment interview and scale is presented. The intervention program provided explicit instruction in mathematical pattern and structure that enhanced the development of students' spatial structuring, multiplicative reasoning, and emergent generalisations. This paper presents the initial findings on the impact of PASMAP and illustrates students' structural development.

Abstract:

The Pattern and Structure Mathematical Awareness Program (PASMAP) stems from a 2-year longitudinal study on students' early mathematical development. The paper outlines the interview assessment, the Pattern and Structure Assessment (PASA), designed to describe students' awareness of mathematical pattern and structure across a range of concepts. An overview of students' performance across items is provided, and their structural development is described.

Abstract:

In recent times considerable research attention has been directed to understanding dark networks, especially criminal and terrorist networks. Dark networks are those in which member motivations are self-interested rather than public-interested, achievements come at the cost of other individuals, groups or societies and, in addition, their activities are both 'covert and illegal' (Raab & Milward, 2003: 415). This 'darkness' has implications for the way in which these networks are structured, the strategies they adopt and their recruitment methods. Such entities exhibit distinctive operating characteristics, most notably the tension between creating an efficient network structure and retaining the ability to hide from public view, while avoiding catastrophic collapse should one member cooperate with authorities (Bouchard, 2007). While the theoretical emphasis has been on criminal and terrorist networks, recent work has demonstrated that corrupt police networks exhibit some distinctive characteristics. In particular, these entities operate within the shadows of a host organisation, the Police Force, and distort the functioning of the 'Thin Blue Line' as the interface between the law-abiding citizenry and criminal society. Drawing on data derived from the Queensland Fitzgerald Commission of Inquiry into Police Misconduct and related documents, this paper examines the motivations, structural properties and operational practices of corrupt police networks and compares and contrasts these with other dark networks and with 'bright' public service networks. The paper confirms the structural differences between dark corrupt police networks and bright networks. However, structural embeddedness alone is found to be an insufficient theoretical explanation for member involvement in such networks; rather, a set of elements combines to shape decision-making. As well as offering important insights into network participation, the paper's findings are especially pertinent in identifying additional points of intervention for police corruption networks.

Abstract:

Collaboration has been enacted as a core strategy by both the government and non-government sectors to address many of the intractable issues confronting contemporary society. The cult of collaboration has become so pervasive that it is now an elastic term referring generally to any form of 'working together'. The lack of specificity about collaboration and its practice means that it risks being reduced to mere rhetoric without sustained practice or action. Drawing on an extensive data set (qualitative and quantitative) of broadly collaborative endeavours gathered over ten years in Queensland, Australia, this paper aims to unpack the black box of collaboration. Specifically, it examines the drivers for collaboration, the dominant structures and mechanisms adopted, what has worked, and unintended consequences. In particular it investigates the skills and competencies required in an embedded collaborative endeavour within and across organisations. Social network analysis is applied to isolate the structural properties that distinguish collaboration from other forms of integration, as well as to highlight key roles and tasks. Collaboration is found to be a distinctive form of working together, characterised by intense and interdependent relationships and exchanges and higher levels of cohesion (density), and requiring new ways of behaving, working, managing and leading. These elements are configured into a practice framework. Developing an empirical evidence base for collaboration structure, practice and strategy provides a useful foundation for theory extension. The paper concludes that for collaboration to be successfully employed as a management strategy, it must move beyond rhetoric and develop a coherent model for action.
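As a small illustration of the density measure used above as the indicator of cohesion, the following sketch (using networkx on an invented toy network, not the paper's own data or analysis code) computes the share of possible ties actually present.

```python
import networkx as nx

# Toy collaboration network among four actors (invented for illustration).
collab = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])

# Density of an undirected network: 2m / (n * (n - 1)).
print(nx.density(collab))  # 4 edges of 6 possible among 4 actors -> ~0.67
```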

Abstract:

Our research explores the design of networked technologies to facilitate local suburban communication and to encourage people to engage with their local community. While there are many investigations of interaction designs for networked technologies, most research relies on small exercises, workshops or other short-term studies. However, we have found these short-term methods to be ineffective in the context of understanding local community interaction. Moreover, we find that people are resistant to putting their time into workshops and exercises, understandably so because these are academic practices, not local community practices. Our contribution is to detail a long-term embedded design approach in which we interact with the community over the long term, in the course of normal community goings-on, with an evolving exploratory prototype. This paper discusses this embedded approach to working in the wild for extended field research.

Abstract:

This chapter argues that evolutionary economics should be founded upon complex systems theory rather than neo-Darwinian analogies concerning natural selection, which focus on supply-side considerations and competition amongst firms and technologies. It suggests that conceptions such as production and consumption functions should be replaced by network representations, in which the preferences or, more correctly, the aspirations of consumers are fundamental and, as such, the primary drivers of economic growth. Technological innovation is viewed as a process intermediate between these aspirational networks and the organizational networks in which goods and services are produced. Consumer knowledge becomes at least as important as producer knowledge in determining how economic value is generated. It becomes clear that the stability afforded by connective systems of rules is essential for economic flexibility to exist, but that too many rules result in inert and structurally unstable states. In contrast, too few rules result in a more stable state, but at a low level of ordered complexity. Economic evolution from this perspective is explored using random and scale-free network representations of complex systems.
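A minimal sketch of the two network representations invoked above, contrasting an Erdős-Rényi random graph with a Barabási-Albert scale-free graph of similar mean degree (networkx and the toy sizes are assumptions; this is illustrative, not the chapter's model).

```python
import networkx as nx

n = 1000
random_net = nx.gnm_random_graph(n, 3000, seed=1)    # random: Poisson-like degrees
scale_free = nx.barabasi_albert_graph(n, 3, seed=1)  # scale-free: hub-dominated

for name, g in [("random", random_net), ("scale-free", scale_free)]:
    degs = [d for _, d in g.degree()]
    print(name, "max degree:", max(degs), "mean degree:", sum(degs) / n)
# At a similar mean degree, the scale-free network shows a far larger
# maximum degree: a few highly connected hubs dominate its structure.
```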

Abstract:

Concrete is commonly used as a primary construction material for tall buildings. Load-bearing components such as columns and walls in concrete buildings are subjected to instantaneous and long-term axial shortening caused by the time-dependent effects of shrinkage, creep and elastic deformation. Reinforcing steel content, variable concrete modulus, the volume-to-surface-area ratio of the elements and environmental conditions govern axial shortening. The impact of differential axial shortening among columns and core shear walls escalates with increasing building height. Differential axial shortening of gravity-loaded elements in geometrically complex and irregular buildings results in permanent distortion and deflection of the structural frame, which has a significant impact on building envelopes, building services, secondary systems and the lifetime serviceability and performance of a building. Existing numerical methods commonly used in design to quantify axial shortening are mainly based on elastic analytical techniques and are therefore unable to capture the complexity of non-linear time-dependent effects. Ambient measurement of axial shortening using vibrating-wire, external mechanical and electronic strain gauges is available to verify values pre-estimated at the design stage; however, installing these gauges permanently, embedded in or on the surface of concrete components, for continuous measurement during and after construction with adequate protection is uneconomical, inconvenient and unreliable. Such methods are therefore rarely, if ever, used in the actual practice of building construction. This research project has developed a rigorous numerical procedure that encompasses linear and non-linear time-dependent phenomena for the prediction of axial shortening of reinforced concrete structural components at the design stage. This procedure takes into consideration (i) the construction sequence, (ii) time-varying values of Young's modulus of reinforced concrete and (iii) creep and shrinkage models that account for variability resulting from environmental effects. The capabilities of the procedure are illustrated through examples. In order to update previous predictions of axial shortening during the construction and service stages of the building, this research has also developed a vibration-based procedure using ambient measurements. This procedure takes into consideration the changes in the vibration characteristics of the structure during and after construction. Its application is illustrated through numerical examples, which also highlight its features. The vibration-based procedure can also be used as a tool to assess the structural health and performance of key structural components during construction and service life.
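As a pointer to the quantities involved, here is a minimal sketch of the elastic + creep + shrinkage decomposition using the simple effective-modulus idealisation that the thesis improves upon (all values are illustrative assumptions; this is not the thesis's non-linear, construction-sequenced procedure).

```python
def axial_shortening(stress_mpa, E_mpa, creep_coeff, shrinkage_strain, length_mm):
    """Total shortening of a member under sustained axial stress."""
    elastic = stress_mpa / E_mpa                 # instantaneous elastic strain
    creep = creep_coeff * elastic                # long-term creep strain
    total_strain = elastic + creep + shrinkage_strain
    return total_strain * length_mm              # shortening in mm

# A 4 m storey-height column at 10 MPa sustained stress (illustrative values):
print(round(axial_shortening(stress_mpa=10, E_mpa=30000,
                             creep_coeff=2.0, shrinkage_strain=500e-6,
                             length_mm=4000), 2))  # ~6.0 mm per storey
```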

Abstract:

Recent algorithms for monocular motion capture (MoCap) estimate weak-perspective camera matrices between images using a small subset of approximately rigid points on the human body (i.e. the torso and hips). A problem with this approach, however, is that these points are often close to coplanar, causing canonical linear factorisation algorithms for rigid structure from motion (SFM) to become extremely sensitive to noise. In this paper, we propose an alternative solution to weak-perspective SFM based on a convex relaxation of graph rigidity. We demonstrate the success of our algorithm on both synthetic and real-world data, allowing for much improved solutions to markerless MoCap problems on human bodies. Finally, we propose an approach to resolving the two-fold ambiguity over bone direction using a k-nearest-neighbour kernel density estimator.
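For context, here is a minimal sketch of the canonical linear factorisation baseline that the paper argues is noise-sensitive for near-coplanar points (the classic Tomasi-Kanade-style SVD step, not the paper's convex graph-rigidity method; the toy data are invented).

```python
import numpy as np

def rigid_sfm_factorise(W):
    """W: 2F x P measurement matrix of image points (F frames, P points).
    Returns motion (2F x 3) and structure (3 x P) up to an affine ambiguity."""
    W = W - W.mean(axis=1, keepdims=True)   # centre each row (removes translation)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])           # camera/motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]    # 3-D structure factor
    return M, S

# Toy example: 5 frames observing 6 random 3-D points under weak perspective.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 6))                               # rigid 3-D points
W = np.vstack([rng.normal(size=(2, 3)) @ X for _ in range(5)])
M, S = rigid_sfm_factorise(W)
print(np.allclose(M @ S, W - W.mean(axis=1, keepdims=True)))  # True: rank-3 fit
```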

Abstract:

With the growing number of XML documents on the Web, it becomes essential to organise these documents effectively in order to retrieve useful information from them. A possible solution is to apply clustering to XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these types of semi-structured documents due to their heterogeneity and structural irregularity. Most of the existing research on clustering techniques focuses on only one feature of the XML documents, either their structure or their content, due to scalability and complexity problems. Knowledge gained in the form of clusters based on structure or content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and the content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, including both kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data.

The overall objective of this thesis is to address these issues by: (1) proposing methods that utilise frequent pattern mining techniques to reduce the dimensionality; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and content information. The explicit model uses a higher-order model, namely a 3rd-order Tensor Space Model (TSM), to explicitly combine the structure and content information. This thesis also proposes a novel incremental technique to decompose large tensor models and uses the decomposed solution for clustering the XML documents.

The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics, to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval, using the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform the related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures to constrain the content shows an improvement in accuracy over content-only and structure-only clustering results. Scalability experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis contributes to effectively combining the structure and content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it addresses research gaps in frequent pattern mining by generating efficient and concise frequent subtrees with various node relationships that can be used in clustering.
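A minimal sketch of the implicit VSM idea, representing each toy XML document by both its structure (tag paths) and its content terms before clustering the combined vectors (scikit-learn and the toy documents are assumptions; this illustrates the combination only, not the thesis's frequent-subtree or tensor machinery).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
import scipy.sparse as sp

# Invented toy corpus: structure as tag paths, content as extracted text.
structure = ["movie/title movie/year", "movie/title movie/cast/actor",
             "book/title book/author", "book/title book/isbn"]
content = ["matrix 1999", "matrix reeves", "dune herbert", "dune 0441013597"]

S = TfidfVectorizer().fit_transform(structure)   # structural features
C = TfidfVectorizer().fit_transform(content)     # content features
X = sp.hstack([S, C]).tocsr()                    # combined structure+content VSM

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # expected: movie docs and book docs in separate clusters
```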

Abstract:

Twin studies offer the opportunity to determine the relative contribution of genes versus environment to traits of interest. Here, we investigate the extent to which variance in brain structure is reduced in monozygotic (MZ) twins with identical genetic make-up. We investigate whether using twins, as compared to a control population, reduces variability in a number of common magnetic resonance (MR) structural measures, and we investigate the location of areas under major genetic influence. This is fundamental to understanding the benefit of using twins in studies where structure is the phenotype of interest. Twenty-three pairs of healthy MZ twins were compared to matched control pairs. Volume, T2 and diffusion MR imaging were performed, as well as MR spectroscopy (MRS). Images were compared using (i) global measures of standard deviation and effect size, (ii) voxel-based analysis of similarity and (iii) intra-pair correlation. Global measures indicated a consistent increase in structural similarity in twins. The voxel-based and correlation analyses indicated a widespread pattern of increased similarity in twin pairs, particularly in frontal and temporal regions. The areas of increased similarity were most widespread for the diffusion trace and least widespread for T2. MRS showed a consistent reduction in metabolite variation that was significant for temporal lobe N-acetylaspartate (NAA). This study has shown the distribution and magnitude of reduced variability in brain volume, diffusion, T2 and metabolites in twins. The data suggest that evaluation of twins discordant for disease is indeed a valid way to attribute genetic or environmental influences to observed abnormalities in patients, since evidence is provided for the underlying assumption of decreased variability in twins.
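A minimal sketch of the intra-pair correlation comparison described above, correlating twin 1 against twin 2 within pairs for twins versus matched control pairs (the data are invented; the study's real measures were MR volumes, T2, diffusion and MRS metabolites).

```python
import numpy as np

rng = np.random.default_rng(0)
shared = rng.normal(100, 10, size=23)    # 23 pairs, as in the study

# MZ twins: mostly shared signal plus small individual noise;
# control pairs: two independent draws from the same population.
twin1 = shared + rng.normal(0, 2, 23)
twin2 = shared + rng.normal(0, 2, 23)
ctrl1 = rng.normal(100, 10, 23)
ctrl2 = rng.normal(100, 10, 23)

print("twin pairs r =", round(np.corrcoef(twin1, twin2)[0, 1], 2))     # high
print("control pairs r =", round(np.corrcoef(ctrl1, ctrl2)[0, 1], 2))  # near 0
```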

Abstract:

A simple phenomenological model for the relationship between structure and composition of the high-Tc cuprates is presented. The model is based on two simple crystal chemistry principles: unit cell doping and charge balance within unit cells. These principles are inspired by key experimental observations of how the materials accommodate large deviations from stoichiometry. Significant high-temperature superconducting (HTSC) properties can be explained consistently without any additional assumptions, while retaining valuable insight for geometric interpretation. When these two chemical principles are combined with a review of Crystal Field Theory (CFT) or Ligand Field Theory (LFT), it becomes clear that the two oxidation states in the conduction planes (typically d8 and d9) belong to the most strongly divergent d-levels as a function of deformation from regular octahedral coordination. This observation offers a link to a range of coupling effects relating vibrations and spin waves through application of Hund's rules. An indication of this model's capacity to predict physical properties of HTSC materials is provided and will be elaborated in subsequent publications. Simple criteria for the relationship between structure and composition in HTSC systems may guide chemical syntheses within new material systems.
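As a worked instance of the charge-balance principle, the following sketch solves electroneutrality for the mean copper oxidation state in the textbook cuprate YBa2Cu3O7-d (a material chosen here purely for illustration, not one named in the abstract).

```python
def mean_cu_oxidation(delta):
    """Electroneutrality for YBa2Cu3O(7-delta):
    Y(3+) + 2*Ba(2+) + 3*Cu(x+) = 2*(7 - delta)  =>  x = (7 - 2*delta) / 3."""
    return (7 - 2 * delta) / 3

for delta in (0.0, 0.5, 1.0):
    print(f"delta={delta}: mean Cu oxidation state = {mean_cu_oxidation(delta):.2f}")
# delta=0 gives ~2.33, i.e. a mixture of Cu(2+) (d9) and Cu(3+) (d8),
# the two conduction-plane oxidation states discussed above.
```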