672 results for task model


Relevance: 20.00%

Abstract:

Understanding the complexities that are involved in the genetics of multifactorial diseases is still a monumental task. In addition to environmental factors that can influence the risk of disease, there are also a number of other complicating factors. Genetic variants associated with age of disease onset may be different from those variants associated with overall risk of disease, and variants may be located in positions that are not consistent with the traditional protein-coding genetic paradigm. Latent Variable Models are well suited for the analysis of genetic data. A latent variable is one that we do not directly observe, but which is believed to exist or is included for computational or analytic convenience in a model. This thesis presents a mixture of methodological developments utilising latent variables, and results from case studies in genetic epidemiology and comparative genomics. Epidemiological studies have identified a number of environmental risk factors for appendicitis, but the disease aetiology of this organ, often thought a useless vestige, remains largely a mystery. The effects of smoking on other gastrointestinal disorders are well documented, and in light of this, the thesis investigates the association between smoking and appendicitis through the use of latent variables. By utilising data from a large Australian twin study questionnaire as both cohort and case-control, evidence is found for an association between tobacco smoking and appendicitis. Twin and family studies have also found evidence for the role of heredity in the risk of appendicitis. Results from previous studies are extended here to estimate the heritability of age at onset and account for the effect of smoking. This thesis presents a novel approach for performing a genome-wide variance components linkage analysis on transformed residuals from a Cox regression. This method finds evidence for a different subset of genes responsible for variation in age at onset than those associated with overall risk of appendicitis. Motivated by increasing evidence of functional activity in regions of the genome once thought of as evolutionary graveyards, this thesis develops a generalisation of the Bayesian multiple changepoint model on aligned DNA sequences to more than two species. This sensitive technique is applied to evaluating the distributions of evolutionary rates, with the finding that they are much more complex than previously apparent. We show strong evidence for at least 9 well-resolved evolutionary rate classes in an alignment of four Drosophila species and at least 7 classes in an alignment of four mammals, including human. A pattern of enrichment and depletion of genic regions in the profiled segments suggests they are functionally significant and most likely consist of various functional classes. Furthermore, a method of incorporating alignment characteristics representative of function, such as GC content and type of mutation, into the segmentation model is developed within this thesis. Evidence of fine-structured segmental variation is presented.
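
To make the latent-variable idea concrete, here is a minimal illustrative sketch (not code from the thesis): a two-component Gaussian mixture fitted by the EM algorithm, in which the unobserved component label of each observation is the latent variable. The data, the number of components and the initialisation are assumptions chosen purely for illustration.

    import numpy as np

    def em_two_component(x, n_iter=100):
        # Fit a two-component Gaussian mixture by EM. The component label of
        # each observation is the latent variable: never observed, but
        # introducing it makes estimation tractable.
        mu = np.array([x.min(), x.max()], dtype=float)   # crude initialisation
        sigma = np.array([x.std(), x.std()])
        pi = np.array([0.5, 0.5])
        for _ in range(n_iter):
            # E-step: posterior probability (responsibility) of each latent label
            dens = np.vstack([
                pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) / sigma[k]
                for k in range(2)
            ])
            resp = dens / dens.sum(axis=0)
            # M-step: re-estimate parameters with responsibilities as weights
            for k in range(2):
                w = resp[k]
                mu[k] = np.sum(w * x) / w.sum()
                sigma[k] = np.sqrt(np.sum(w * (x - mu[k]) ** 2) / w.sum())
                pi[k] = w.mean()
        return pi, mu, sigma

    # Illustrative data only: two hypothetical age-at-onset groups.
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(20, 5, 300), rng.normal(45, 8, 200)])
    print(em_two_component(x))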

Relevance: 20.00%

Abstract:

Specialised support for student nurses making the transition to graduate nurse can be crucial to a successful and smooth adjustment, and can create a path to positive and stable career experiences. This paper describes an enhanced model of final-year nursing student placements which was trialled in 2006 at the Queensland University of Technology. The model involved collaboration with two major urban health services, and resources were developed to support effective transition experiences. Ninety-two students, including 29 trial participants and 63 non-trial participants, were assessed on preparedness for professional practice before and after the trial semester. Results indicated an increase in preparedness across the entire sample, but students participating in the trial did not differ significantly in overall preparedness change from those who did not participate. Higher baseline preparedness in the trial group highlighted the possibility that proactive students who choose enrichment experiences are more likely to gain benefit from such options than those who do not. Qualitative findings from focus groups conducted with 12 transition group students highlighted that one of the main beneficial aspects of the experience for students was the sense of belonging to a team that understood their learning needs and could work constructively with them.

Relevance: 20.00%

Abstract:

Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted to find a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques due to the similarities in how they achieved their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further benefits in performance over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcoming of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the complete candidate dataset and the best heuristically selected dataset, whilst being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternative techniques for speaker verification.
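
As a hedged illustration of the hybrid GMM mean supervector SVM classifier mentioned above, the sketch below fits a small diagonal-covariance GMM to each utterance's feature frames, stacks the component means into a supervector, and trains a linear SVM with impostor utterances as the negative class. The feature dimensionality, component count and random data are assumptions; a real system would MAP-adapt a universal background model rather than fit each GMM from scratch.

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVC

    def supervector(frames, n_components=8, seed=0):
        # Fit a diagonal-covariance GMM to one utterance's feature frames and
        # stack its component means into a single "supervector".
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                              random_state=seed).fit(frames)
        return gmm.means_.ravel()

    # Illustrative data: each "utterance" is an array of MFCC-like frames.
    rng = np.random.default_rng(0)
    target_utts = [rng.normal(0.0, 1.0, size=(200, 13)) for _ in range(5)]
    impostor_utts = [rng.normal(0.5, 1.2, size=(200, 13)) for _ in range(20)]

    X = np.array([supervector(u) for u in target_utts + impostor_utts])
    y = np.array([1] * len(target_utts) + [0] * len(impostor_utts))

    # Discriminative back-end: a linear SVM trained on supervectors, with the
    # impostor (background) utterances providing the negative examples.
    clf = SVC(kernel="linear").fit(X, y)
    test = supervector(rng.normal(0.0, 1.0, size=(200, 13))).reshape(1, -1)
    print(clf.decision_function(test))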

Relevance: 20.00%

Abstract:

Ultraviolet radiation (UV) is the carcinogen that causes the most common malignancy in humans – skin cancer. However, moderate UV exposure is essential for producing vitamin D in our skin. Vitamin D increases the absorption of calcium from the diet, and adequate calcium is necessary for the building and maintenance of bones. Thus, low levels of vitamin D can cause osteomalacia and rickets and contribute to osteoporosis. Emerging evidence also suggests vitamin D may protect against falls, internal cancers, psychiatric conditions, autoimmune diseases and cardiovascular diseases. Since the dominant source of vitamin D is sunlight exposure, there is a need to understand what constitutes a “balanced” level of sun exposure that maintains an adequate level of vitamin D while minimising the risks of eye damage, skin damage and skin cancer resulting from excessive UV exposure. There are many steps in the pathway from incoming solar UV to the eventual vitamin D status of humans (measured as 25-hydroxyvitamin D in the blood), and our knowledge about many of these steps is currently incomplete. This project begins by investigating the levels of UV available for synthesising vitamin D, and how these levels vary across seasons, latitudes and times of the day. The thesis then covers experiments conducted with an in vitro model, which was developed to study several aspects of vitamin D synthesis. Results from the model suggest the relationship between UV dose and vitamin D is not linear. This is an important input into public health messages regarding ‘safe’ UV exposure: larger doses of UV, beyond a certain limit, may not continue to produce vitamin D; however, they will increase the risk of skin cancers and eye damage. The model also showed that, when given identical doses of UV, the amount of vitamin D produced was affected by temperature. In humans, a temperature-dependent reaction must occur in the top layers of the skin prior to vitamin D entering the bloodstream. The hypothesis is raised that cooler temperatures (occurring in winter and at high latitudes) may reduce vitamin D production in humans. Finally, the model has also been used to study the wavelengths of UV thought to be responsible for producing vitamin D. It appears that vitamin D production is limited to a small range of UV wavelengths, which may be narrower than previously thought. Together, these results suggest that further research is needed into the ability of humans to synthesise vitamin D from sunlight. In particular, more information is needed about the dose-response relationship in humans and about the proposed impact of temperature. Having an accurate action spectrum will also be essential for measuring the available levels of vitamin D-effective UV. As this research continues, it will contribute to the scientific evidence base needed for devising a public health message that balances the risks of excessive UV exposure with maintaining adequate vitamin D.
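
The following is a minimal sketch, using invented numbers, of how a biologically effective UV dose can be computed: spectral irradiance is weighted by an action spectrum and integrated over wavelength, then multiplied by exposure time. The irradiance values and the simple triangular action spectrum are placeholders, not the measured spectra or the action spectrum discussed in the thesis.

    import numpy as np

    # Placeholder inputs: wavelengths (nm), a made-up solar spectral irradiance
    # (W m^-2 nm^-1) and a toy action spectrum that is 1 up to 315 nm and falls
    # linearly to 0 at 330 nm. Real work would use measured spectra and a
    # published vitamin D action spectrum.
    wavelength = np.arange(290, 331, dtype=float)
    irradiance = np.linspace(0.001, 0.05, wavelength.size)
    action = np.where(wavelength <= 315.0, 1.0,
                      np.clip(1.0 - (wavelength - 315.0) / 15.0, 0.0, 1.0))

    # Biologically effective irradiance: weight each wavelength by the action
    # spectrum and integrate over wavelength (trapezoidal rule).
    effective_irradiance = np.trapz(irradiance * action, wavelength)  # W m^-2

    # Effective dose for a given exposure time.
    exposure_seconds = 15 * 60
    dose = effective_irradiance * exposure_seconds  # J m^-2
    print(effective_irradiance, dose)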

Relevance: 20.00%

Abstract:

In the quest for shorter time-to-market, higher quality and reduced cost, model-driven software development has emerged as a promising approach to software engineering. The central idea is to promote models to first-class citizens in the development process. Starting from a set of very abstract models in the early stage of the development, they are refined into more concrete models and finally, as a last step, into code. As early phases of development focus on different concepts compared to later stages, various modelling languages are employed to most accurately capture the concepts and relations under discussion. In light of this refinement process, translating between modelling languages becomes a time-consuming and error-prone necessity. This is remedied by model transformations providing support for reusing and automating recurring translation efforts. These transformations typically can only be used to translate a source model into a target model, but not vice versa. This poses a problem if the target model is subject to change. In this case the models get out of sync and therefore do not constitute a coherent description of the software system anymore, leading to erroneous results in later stages. This is a serious threat to the promised benefits of quality, cost-saving, and time-to-market. Therefore, providing a means to restore synchronisation after changes to models is crucial if the model-driven vision is to be realised. This process of reflecting changes made to a target model back to the source model is commonly known as Round-Trip Engineering (RTE). While there are a number of approaches to this problem, they impose restrictions on the nature of the model transformation. Typically, in order for a transformation to be reversed, for every change to the target model there must be exactly one change to the source model. While this makes synchronisation relatively “easy”, it is ill-suited for many practically relevant transformations as they do not have this one-to-one character. To overcome these issues and to provide a more general approach to RTE, this thesis puts forward an approach in two stages. First, a formal understanding of model synchronisation on the basis of non-injective transformations (where a number of different source models can correspond to the same target model) is established. Second, detailed techniques are devised that allow the implementation of this understanding of synchronisation. A formal underpinning for these techniques is drawn from abductive logic reasoning, which allows the inference of explanations from an observation in the context of a background theory. As non-injective transformations are the subject of this research, there might be a number of changes to the source model that all equally reflect a certain target model change. To help guide the procedure in finding “good” source changes, model metrics and heuristics are investigated. Combining abductive reasoning with best-first search and a “suitable” heuristic enables efficient computation of a number of “good” source changes. With this procedure Round-Trip Engineering of non-injective transformations can be supported.
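
A hedged sketch of the search idea described above follows: best-first search over candidate source-model edits, guided by a heuristic, collecting edit sequences whose transformed result reproduces the observed target model. The function names, the representation of models and edits, and the termination criteria are hypothetical illustrations, not the thesis's actual machinery.

    import heapq
    from itertools import count

    def best_first_source_changes(source, target, candidate_edits, transform,
                                  heuristic, max_results=3, max_expansions=10000):
        # candidate_edits(model) yields (edit_description, edited_model) pairs;
        # transform(model) maps a source model to its target model;
        # heuristic(edits) scores an edit sequence (lower = "better" explanation).
        tie = count()  # tie-breaker so the heap never has to compare models
        frontier = [(heuristic([]), next(tie), [], source)]
        explanations, seen = [], set()
        expansions = 0
        while frontier and len(explanations) < max_results and expansions < max_expansions:
            _, _, edits, model = heapq.heappop(frontier)
            expansions += 1
            if transform(model) == target:  # this edit sequence explains the target change
                explanations.append(edits)
                continue
            for description, edited in candidate_edits(model):
                key = repr(edited)
                if key not in seen:
                    seen.add(key)
                    new_edits = edits + [description]
                    heapq.heappush(frontier, (heuristic(new_edits), next(tie),
                                              new_edits, edited))
        return explanations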

Relevance: 20.00%

Abstract:

In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very broad, sometimes controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there some significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach to modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). They are four-parameter distributions which allow the first four moments to be modelled nearly independently: in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the data-generating process (DGP) underlying the data and helps in choosing a good technique to model them. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness, implicitly assuming the existence of the third moment. However, the GLDs suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, the ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
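
As a hedged illustration of the percentile view of VaR used as a benchmark here, the sketch below computes a rolling historical-simulation VaR: at each date, VaR is the empirical tail percentile of the previous window of returns, reported as a positive loss. The window length, confidence level and simulated heavy-tailed returns are assumptions for illustration, not the thesis's data or preferred estimator.

    import numpy as np

    def historical_var(returns, level=0.99, window=250):
        # Rolling historical-simulation VaR: the empirical (1 - level) percentile
        # of the previous `window` returns, reported as a positive loss.
        var = np.full(returns.shape, np.nan)
        for t in range(window, len(returns)):
            var[t] = -np.percentile(returns[t - window:t], (1.0 - level) * 100.0)
        return var

    # Illustrative heavy-tailed daily returns (Student-t), not real index data.
    rng = np.random.default_rng(0)
    returns = 0.01 * rng.standard_t(df=4, size=1500)
    print(historical_var(returns)[-5:])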

Relevance: 20.00%

Abstract:

Process modeling is a complex organizational task that requires many iterations and communication between the business analysts and the domain specialists involved in the process modeling. The challenge of process modeling is exacerbated, when the process of modeling has to be performed in a cross-organizational, distributed environment. Some systems have been developed to support collaborative process modeling, all of which use traditional 2D interfaces. We present an environment for collaborative process modeling, using 3D virtual environment technology. We make use of avatar instantiations of user ego centres, to allow for the spatial embodiment of the user with reference to the process model. We describe an innovative prototype collaborative process modeling approach, implemented as a modeling environment in Second Life. This approach leverages the use of virtual environments to provide user context for editing and collaborative exercises. We present a positive preliminary report on a case study, in which a test group modelled a business process using the system in Second Life.

Relevance: 20.00%

Abstract:

This paper explores the philosophical roots of appropriation within Marx's theories and socio-cultural studies in an attempt to seek common ground among existing theories of technology appropriation in IS research. Drawing on appropriation perspectives from Adaptive Structuration Theory, the Model of Technology Appropriation and the Structurational Model of Technology for comparison, we aim to generate a Marxian model that provides a starting point toward a general causal model of technology appropriation. This paper opens a philosophical discussion on the phenomenon of appropriation in the IS community, directing attention to foundational concepts in the human-technology nexus using ideas conceived by Marx.

Relevance: 20.00%

Abstract:

Process models are used by information professionals to convey semantics about the business operations in a real world domain intended to be supported by an information system. The understandability of these models is vital to them actually being used. After all, what is not understood cannot be acted upon. Yet until now, understandability has primarily been defined as an intrinsic quality of the models themselves. Moreover, those studies that looked at understandability from a user perspective have mainly conceptualized users through rather arbitrary sets of variables. In this paper we advance an integrative framework to understand the role of the user in the process of understanding process models. Building on cognitive psychology, goal-setting theory and multimedia learning theory, we identify three stages of learning required to realize model understanding, these being Presage, Process, and Product. We define eight relevant user characteristics in the Presage stage of learning, three knowledge construction variables in the Process stage and three potential learning outcomes in the Product stage. To illustrate the benefits of the framework, we review existing process modeling work to identify where our framework can complement and extend existing studies.

Relevance: 20.00%

Abstract:

In this paper, we propose a multivariate GARCH model with a time-varying conditional correlation structure. The new double smooth transition conditional correlation (DSTCC) GARCH model extends the smooth transition conditional correlation (STCC) GARCH model of Silvennoinen and Teräsvirta (2005) by including a second variable according to which the correlations change smoothly between states of constant correlations. A Lagrange multiplier test is derived to test the constancy of correlations against the DSTCC-GARCH model, and another to test for an additional transition in the STCC-GARCH framework. In addition, other specification tests, with the aim of aiding the model-building procedure, are considered. Analytical expressions for the test statistics and the required derivatives are provided. Applying the model to stock and bond futures data, we discover that the correlation pattern between them changed dramatically around the turn of the century. The model is also applied to a selection of world stock indices, and we find evidence of an increasing degree of integration in the capital markets.
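
To illustrate the structure of the model, the sketch below implements a simplified bivariate version of the double smooth transition conditional correlation: two logistic transition functions interpolate smoothly between four constant-correlation states. The transition variables, parameter values and correlation states are invented for illustration; the full model embeds this correlation structure in a multivariate GARCH specification.

    import numpy as np

    def logistic_transition(s, gamma, c):
        # Smooth transition function G(s; gamma, c) in (0, 1): gamma controls
        # the speed of the transition and c its location.
        return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

    def dstcc_correlation(s1, s2, rho, gamma=(10.0, 10.0), c=(0.0, 0.5)):
        # Time-varying correlation as a weighted combination of four constant
        # correlation states, with weights given by two transition variables
        # (for example a lagged volatility proxy and calendar time).
        g1 = logistic_transition(s1, gamma[0], c[0])
        g2 = logistic_transition(s2, gamma[1], c[1])
        return ((1 - g1) * (1 - g2) * rho[0, 0] + (1 - g1) * g2 * rho[0, 1]
                + g1 * (1 - g2) * rho[1, 0] + g1 * g2 * rho[1, 1])

    # Invented transition variables: a noisy proxy and calendar time on [0, 1].
    t = np.linspace(0.0, 1.0, 500)
    proxy = 0.5 * np.sin(6.0 * t)
    rho_states = np.array([[0.1, 0.3], [0.4, 0.8]])
    rho_t = dstcc_correlation(proxy, t, rho_states)
    print(rho_t[:3], rho_t[-3:])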

Relevance: 20.00%

Abstract:

Several brain imaging studies have assumed that response conflict is present in Stroop tasks. However, this has not been demonstrated directly. We examined the time course of stimulus and response conflict resolution in a numerical Stroop task by combining single-trial electromyography (EMG) and event-related brain potentials (ERPs). EMG enabled the direct tracking of response conflict, and the peak latency of the P300 ERP wave was used to index stimulus conflict. In correctly answered trials of the incongruent condition, EMG detected robust incorrect-response-hand activation, which appeared consistently in single trials. In 50–80% of the trials, correct and incorrect response hand activation coincided temporally, while in 20–50% of the trials incorrect hand activation preceded correct hand activation. The EMG data provide robust direct evidence for response conflict. However, congruency effects also appeared in the peak latency of the P300 wave, which suggests that stimulus conflict also played a role in the Stroop paradigm. The findings are explained by the continuous flow model of information processing: partially processed task-irrelevant stimulus information can result in stimulus conflict and can prepare incorrect response activity. A robust congruency effect appeared in the amplitude of incongruent versus congruent ERPs between 330 and 400 ms; this effect may be related to the activity of the anterior cingulate cortex.
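
As a hedged sketch of the two single-trial measures described above, the code below estimates an EMG onset latency (first threshold crossing of the rectified, smoothed signal relative to a pre-stimulus baseline) and a P300 peak latency (latency of the maximum within a post-stimulus window). The sampling rate, windows, threshold rule and synthetic signals are assumptions, not the study's recordings or exact algorithms.

    import numpy as np

    def emg_onset_ms(trial, fs, baseline_ms=100, k=3.0, smooth=10):
        # Rectify and smooth the EMG, then return the latency (ms) of the first
        # sample after the baseline window exceeding baseline mean + k * SD.
        env = np.convolve(np.abs(trial), np.ones(smooth) / smooth, mode="same")
        n_base = int(baseline_ms * fs / 1000)
        thresh = env[:n_base].mean() + k * env[:n_base].std()
        above = np.nonzero(env[n_base:] > thresh)[0]
        return None if above.size == 0 else (n_base + above[0]) * 1000.0 / fs

    def p300_peak_latency_ms(erp, fs, window_ms=(250, 500)):
        # Latency (ms) of the maximum of the ERP within the given window.
        lo, hi = (int(m * fs / 1000) for m in window_ms)
        return (lo + int(np.argmax(erp[lo:hi]))) * 1000.0 / fs

    # Synthetic single-trial EMG and averaged ERP, for illustration only.
    fs = 1000
    rng = np.random.default_rng(0)
    trial = rng.normal(0.0, 1.0, 800)
    trial[350:500] += 8.0 * np.hanning(150)     # simulated muscle burst
    erp = rng.normal(0.0, 0.2, 800)
    erp[300:450] += np.hanning(150)             # simulated P300-like deflection
    print(emg_onset_ms(trial, fs), p300_peak_latency_ms(erp, fs))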

Relevance: 20.00%

Abstract:

This paper explores the potential therapeutic role of the naturally occurring sugar heparan sulfate (HS) for the augmentation of bone repair. Scaffolds comprising fibrin glue loaded with 5 µg of embryonically derived HS were assessed, firstly as a release reservoir, and secondly as a scaffold to stimulate bone regeneration in a critical-size rat cranial defect. We show that HS-loaded scaffolds have a uniform distribution of HS, which was readily released with a typical burst phase, quickly followed by a prolonged delivery lasting several days. Importantly, the released HS contributed to improved wound healing over a 3-month period as determined by micro-computed tomography (µCT) scanning, histology, histomorphometry, and PCR for osteogenic markers. In all cases, only minimal healing was observed after 1 and 3 months in the absence of HS. In contrast, marked healing was observed by 3 months following HS treatment, with nearly full closure of the defect site. PCR analysis showed significant increases in the gene expression of the osteogenic markers Runx2, alkaline phosphatase, and osteopontin in the heparan sulfate group compared with controls. These results further emphasize the important role HS plays in augmenting wound healing, and its successful delivery in a hydrogel provides a novel alternative to autologous bone graft and growth factor-based therapies.

Relevance: 20.00%

Abstract:

As a result of the growing adoption of Business Process Management (BPM) technology different stakeholders need to understand and agree upon the process models that are used to configure BPM systems. However, BPM users have problems dealing with the complexity of such models. Therefore, the challenge is to improve the comprehension of process models. While a substantial amount of literature is devoted to this topic, there is no overview of the various mechanisms that exist to deal with managing complexity in (large) process models. It is thus hard to obtain comparative insight into the degree of support offered for various complexity reducing mechanisms by state-of-the-art languages and tools. This paper focuses on complexity reduction mechanisms that affect the abstract syntax of a process model, i.e. the structure of a process model. These mechanisms are captured as patterns, so that they can be described in their most general form and in a language- and tool-independent manner. The paper concludes with a comparative overview of the degree of support for these patterns offered by state-of-the-art languages and language implementations.

Relevance: 20.00%

Abstract:

With increasing pressure to provide environmentally responsible infrastructure products and services, stakeholders are placing significant focus on the early identification of the financial viability and outcomes of infrastructure projects. Traditionally, there has been an imbalance between sustainability measures and project budget. On one hand, the industry tends to employ a first-cost mentality and approach to developing infrastructure projects. On the other, environmental experts and technology innovators often push for the greenest products and systems without much concern for cost. This situation is changing quickly as the industry comes under pressure to continue to return profit while better adapting to current and emerging global issues of sustainability. For the infrastructure sector to contribute to sustainable development, it will need to increase value and efficiency. Thus, there is a great need for tools that will enable decision makers to evaluate competing initiatives and identify the most sustainable approaches to procuring infrastructure projects. In order to ensure that these objectives are achieved, the concept of life-cycle costing analysis (LCCA) will play a significant role in the economics of an infrastructure project. Recently, a few research initiatives have applied LCCA models to road infrastructure with a focus on the traditional economics of a project. There is little coverage of life-cycle costing as a method to evaluate the criteria and assess the economic implications of pursuing sustainability in road infrastructure projects. To rectify this problem, this paper reviews the theoretical basis of previous LCCA models before discussing their inability to account for sustainability indicators in road infrastructure projects. It then introduces ongoing research aimed at developing a new model to integrate various new cost elements, based on sustainability indicators, with the traditional and proven LCCA approach. It is expected that the research will generate a working model for sustainability-based life-cycle cost analysis.
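
A minimal sketch of the discounted life-cycle cost calculation underlying LCCA is given below: the initial cost plus the present value of each later year's costs, where a sustainability-extended model would add further cost elements (for example emissions, user or end-of-life costs) to the annual figures. All figures and the discount rate are invented for illustration.

    def life_cycle_cost(initial, annual_costs, discount_rate):
        # Discounted life-cycle cost: initial cost plus the present value of the
        # cost incurred in each later year (maintenance, operation and, in a
        # sustainability-extended model, additional cost elements).
        pv = sum(cost / (1.0 + discount_rate) ** year
                 for year, cost in enumerate(annual_costs, start=1))
        return initial + pv

    # Invented comparison of two hypothetical road options over 30 years.
    conventional = life_cycle_cost(10_000_000, [250_000] * 30, 0.05)
    greener = life_cycle_cost(11_500_000, [180_000] * 30, 0.05)
    print(round(conventional), round(greener))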