210 results for Latent class model


Relevance:

30.00%

Publisher:

Abstract:

SAP and its research partners have been developing a language for describing details of services from various viewpoints called the Unified Service Description Language (USDL). At the time of writing, version 3.0 describes technical implementation aspects of services, as well as stakeholders, pricing, lifecycle, and availability. Work is also underway to address other business and legal aspects of services. This language is designed to be used in service portfolio management, with a repository of service descriptions being available to various stakeholders in an organisation to allow for service prioritisation, development, deployment and lifecycle management. The structure of the USDL metadata is specified using an object-oriented metamodel that conforms to UML, MOF and EMF Ecore. As such it is amenable to code generation for implementations of repositories that store service description instances. Although Web services toolkits can be used to make these programming language objects available as a set of Web services, the practicalities of writing distributed clients against over one hundred class definitions, containing several hundred attributes, will make for very large WSDL interfaces and highly inefficient “chatty” implementations. This paper gives the high-level design for a completely model-generated repository for any version of USDL (or any other data-only metamodel), which uses the Eclipse Modelling Framework’s Java code generation, along with several open source plugins, to create a robust, transactional repository running in a Java application with a relational datastore. However, the repository exposes a generated WSDL interface at a coarse granularity, suitable for distributed client code and user-interface creation. It uses heuristics to drive code generation to bridge between the Web service and EMF granularities.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a study on estimating the latent demand for rail transit in an Australian context. Based on travel mode-choice modelling, a two-stage analysis approach is proposed, comprising market population identification and mode share estimation. A case study is conducted on the Midland-Fremantle rail transit corridor in Perth, Western Australia. The required data mainly include journey-to-work trip data from the Australian Bureau of Statistics Census 2006 and the work-purpose mode-choice model in the Perth Strategic Transport Evaluation Model. The market profile is analysed in terms of catchment areas, market population, mode shares, mode-specific trip distributions and average trip distances. A numerical simulation is performed to test the sensitivity of transit ridership to changes in fuel price. A corridor-level transit demand function of fuel price is thus obtained and its elasticity characteristics are discussed. This study explores a viable approach to developing a decision-support tool for assessing the short-term impacts of policy and operational adjustments on corridor-level demand for rail transit.
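
To make the second stage (mode share estimation) and the fuel-price sensitivity test concrete, the sketch below applies a generic multinomial logit mode-choice model and computes an arc elasticity of corridor rail demand with respect to fuel price. The utility coefficients, trip costs and market population are hypothetical placeholders, not the parameters of the Perth Strategic Transport Evaluation Model.

```python
import numpy as np

def logit_shares(utilities):
    """Multinomial logit: share of each mode given its (dis)utility."""
    expu = np.exp(utilities - utilities.max())  # stabilised softmax
    return expu / expu.sum()

def rail_demand(fuel_price, market_population=50_000):
    """Corridor-level rail ridership as a function of fuel price (illustrative)."""
    # Hypothetical generalised-cost utilities for car, bus and rail;
    # only the car utility depends on fuel price here.
    fuel_cost = fuel_price * 12.0          # litres consumed on an average trip
    u_car = -0.05 * (fuel_cost + 8.0)      # fuel plus parking
    u_bus = -0.05 * 6.0 - 0.8              # fare plus a mode-specific penalty
    u_rail = -0.05 * 5.0 - 0.5
    shares = logit_shares(np.array([u_car, u_bus, u_rail]))
    return market_population * shares[2]   # rail trips

# Arc elasticity of rail demand with respect to a 10% fuel price rise.
p0, p1 = 1.50, 1.65
q0, q1 = rail_demand(p0), rail_demand(p1)
elasticity = ((q1 - q0) / ((q0 + q1) / 2)) / ((p1 - p0) / ((p0 + p1) / 2))
print(f"rail trips: {q0:.0f} -> {q1:.0f}, arc elasticity ~ {elasticity:.2f}")
```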

Relevance:

30.00%

Publisher:

Abstract:

This study investigates a way to systematically integrate information literacy (IL) into an undergraduate academic programme and develops a model for integrating information literacy across higher education curricula. Curricular integration of information literacy in this study means weaving information literacy into an academic curriculum; in the associated literature, it is also referred to as the information literacy embedding approach or the intra-curricular approach. The key findings identified from this study are presented in four categories: the characteristics of IL integration; the key stakeholders in IL integration; IL curricular design strategies; and the process of IL curricular integration. Three key characteristics of the curricular integration of IL are identified: collaboration and negotiation, contextualisation, and ongoing interaction with information. The key stakeholders in the curricular integration of IL are the librarians, the course coordinators and lecturers, the heads of faculties or departments, and the students. Strategies for IL curricular design include the use of IL policies and standards, the combination of face-to-face and online teaching as an emerging trend, and the use of IL assessment tools, which play an important role in IL integration. IL can be integrated into the intended curriculum (what an institution expects its students to learn), the offered curriculum (what the teachers teach) and the received curriculum (what students actually learn). IL integration is a process of negotiation, collaboration and the implementation of the intended curriculum. IL can be integrated at different curricular levels: institutional, faculty, departmental, course and class. Based on these key findings, an IL curricular integration model is developed. The model integrates curriculum, pedagogy and learning theories, IL theories, IL guidelines and the collaboration of multiple partners. The model provides a practical approach to integrating IL into multiple courses across an academic degree. The development of the model was based on the IL integration experiences of various disciplines in three universities and the implementation experience of an engineering programme at another university; thus it may be of interest to other disciplines. The model has the potential to enhance IL teaching and learning and curricular development, and to support the implementation of graduate attributes in higher education. Sociocultural theories are applied to the research process and IL curricular design of this study. Sociocultural theories describe learning as being embedded within social events and occurring as learners interact with other people, objects and events in a collaborative environment. They are applied to explore how academic staff and librarians experience the curricular integration of IL, and they also support collaboration in the curricular integration of IL and the development of an IL integration model. This study consists of two phases. Phase I (2007) was the interview phase, in which both academic staff and librarians at three IL-active universities were interviewed. During this phase, attention was paid specifically to the practical process of curricular integration of IL and IL activity design. Phase II, the development phase (2007-2008), was conducted at a fourth university. This phase explored the systematic integration of IL into an engineering degree from Year 1 to Year 4.
Learning theories such as sociocultural theories, Bloom’s Taxonomy and IL theories are used in IL curricular development. Based on the findings from both phases, an IL integration model was developed. The findings and the model contribute to IL education, research and curricular development in higher education. The sociocultural approach adopted in this study also extends the application of sociocultural theories to the IL integration process and curricular design in higher education.

Relevance:

30.00%

Publisher:

Abstract:

A model to predict the buildup of mainly traffic-generated volatile organic compounds or VOCs (toluene, ethylbenzene, ortho-xylene, meta-xylene, and para-xylene) on urban road surfaces is presented. The model requires three traffic parameters, namely average daily traffic (ADT), volume-to-capacity ratio (V/C), and surface texture depth (STD), and two chemical parameters, namely total suspended solids (TSS) and total organic carbon (TOC), as predictor variables. Principal component analysis and two-phase factor analysis were performed to characterize the model calibration parameters. Traffic congestion was found to be the underlying cause of traffic-related VOC buildup on urban roads. The model calibration was optimized using an orthogonal experimental design, and partial least squares regression was used for model prediction. It was found that a better-optimized orthogonal design could be achieved by including the latent factors of the data matrix in the design. The model performed fairly accurately for three different land uses as well as five different particle size fractions. The relative prediction errors were 10–40% for the different size fractions and 28–40% for the different land uses, while the coefficients of variation of the predicted intersite VOC concentrations were in the range of 25–45% for the different size fractions. Considering the sizes of the data matrices, these coefficients of variation were within the acceptable interlaboratory range for analytes at ppb concentration levels.
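
As a rough illustration of the prediction step, the sketch below fits a partial least squares regression on a calibration matrix whose columns stand in for the five predictor variables named above (ADT, V/C, STD, TSS, TOC). The data are random placeholders and the number of latent factors is arbitrary, so this is not the calibrated model of the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Placeholder calibration data: rows are road sites, columns stand in for
# ADT, V/C, STD, TSS and TOC; y is the VOC buildup load per site.
X = rng.normal(size=(30, 5))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=30)

# Partial least squares regression with a small number of latent factors.
pls = PLSRegression(n_components=3)
pls.fit(X, y)

# Predict VOC buildup for new sites and report relative prediction errors.
X_new = rng.normal(size=(5, 5))
y_true = 2.0 * X_new[:, 0] + 1.5 * X_new[:, 1] - 0.5 * X_new[:, 2]
y_pred = pls.predict(X_new).ravel()
rel_err = np.abs(y_pred - y_true) / (np.abs(y_true) + 1e-9)
print("relative prediction errors:", np.round(rel_err, 2))
```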

Relevance:

30.00%

Publisher:

Abstract:

Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear), which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data), which can only partially reveal a failure mechanism. While direct indicators provide a more precise indication of asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete state, linearity and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes found in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated gearbox test. In this application, the new state space model provides a better fit than a state space model with linear and Gaussian assumptions.
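
A minimal bootstrap particle filter in this spirit is sketched below, using a hypothetical nonlinear, non-Gaussian degradation model rather than the model developed in the paper: the hidden state is an irreversibly growing crack depth, and the observation is an indirect vibration indicator with multiplicative (lognormal) noise.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000            # number of particles
sigma_obs = 0.15    # log-scale observation noise
dt = 1.0            # inspection interval (need not be fixed)

def propagate(x, dt):
    """Irreversible, nonlinear growth of the hidden crack depth (illustrative)."""
    growth = 0.05 * np.exp(0.3 * x) * dt * np.abs(1 + 0.2 * rng.standard_normal(x.shape))
    return x + growth

def obs_likelihood(y, x):
    """Lognormal likelihood of the indirect vibration indicator given crack depth."""
    mean_log = np.log(2.0 * np.sqrt(np.maximum(x, 1e-6)))
    return np.exp(-(np.log(y) - mean_log) ** 2 / (2 * sigma_obs ** 2)) / (y * sigma_obs * np.sqrt(2 * np.pi))

# Simulate a "true" degradation path and the indirect measurements.
true_x, xs, ys = 0.5, [], []
for _ in range(20):
    true_x = propagate(np.array([true_x]), dt)[0]
    xs.append(true_x)
    ys.append(2.0 * np.sqrt(true_x) * np.exp(sigma_obs * rng.standard_normal()))

# Bootstrap particle filter: propagate, weight, resample.
particles = np.full(N, 0.5)
for y in ys:
    particles = propagate(particles, dt)
    w = obs_likelihood(y, particles)
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)   # multinomial resampling

print(f"estimated final crack depth: {particles.mean():.3f}")
print(f"true final crack depth:      {xs[-1]:.3f}")
```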

Relevance:

30.00%

Publisher:

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
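
In generic notation (a sketch of the standard setting, not the paper's exact statement), the penalised selection rule and the type of oracle inequality described above take the form:

```latex
% Penalised empirical risk minimisation over a nested sequence of models
% \mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \dots (generic notation).
\hat{f}_k = \arg\min_{f \in \mathcal{F}_k} \hat{R}_n(f),
\qquad
\hat{k} = \arg\min_{k} \Big\{ \hat{R}_n(\hat{f}_k) + \operatorname{pen}_n(k) \Big\}.

% Oracle inequality of the type described: the excess risk of the selected
% function is bounded by the best approximation error plus a constant times
% the complexity penalty.
R\big(\hat{f}_{\hat{k}}\big) - R^{*}
\;\le\;
\min_{k}\Big\{ \inf_{f \in \mathcal{F}_k} R(f) - R^{*} \;+\; C\,\operatorname{pen}_n(k) \Big\}.
```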

Relevance:

30.00%

Publisher:

Abstract:

The ability to detect unusual events in surveillance footage as they happen is a highly desirable feature for a surveillance system. However, this problem remains challenging in crowded scenes due to occlusions and the clustering of people. In this paper, we propose using the Distributed Behavior Model (DBM), which has been widely used in computer graphics, for video event detection. Our approach does not rely on object tracking and is robust to camera movements. We use sparse coding for classification, and test our approach on various datasets. Our proposed approach outperforms a state-of-the-art method that uses the social force model and Latent Dirichlet Allocation.
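
A minimal sketch of the sparse-coding step is given below: a dictionary is learned from features of "normal" footage, and the reconstruction error of a test feature's sparse code is used as an anomaly score. The feature dimensions, dictionary size and threshold are placeholder choices, not the exact procedure of the paper.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(2)

# Placeholder features extracted from spatio-temporal patches of "normal" footage.
normal_features = rng.normal(size=(200, 32))

# Learn an overcomplete dictionary from normal behaviour only.
dico = DictionaryLearning(n_components=64, transform_algorithm="omp",
                          transform_n_nonzero_coefs=5, max_iter=20, random_state=0)
dico.fit(normal_features)
D = dico.components_

def anomaly_score(features, dictionary, n_nonzero=5):
    """Reconstruction error of the sparse code; large error suggests an unusual event."""
    coder = SparseCoder(dictionary=dictionary, transform_algorithm="omp",
                        transform_n_nonzero_coefs=n_nonzero)
    codes = coder.transform(features)
    recon = codes @ dictionary
    return np.linalg.norm(features - recon, axis=1)

# Patches that reconstruct poorly over the "normal" dictionary are flagged.
test_features = rng.normal(size=(10, 32)) + 3.0     # shifted, i.e. "abnormal"
scores = anomaly_score(test_features, D)
threshold = np.percentile(anomaly_score(normal_features, D), 99)
print("flagged as unusual:", scores > threshold)
```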

Relevance:

30.00%

Publisher:

Abstract:

The health effects of cold and hot temperatures are strongest in the frail and elderly. A large number of deaths in this "susceptible pool" after heat waves and cold snaps can cause mortality displacement, where an immediate increase in mortality is somewhat offset by a subsequent decrease in the following weeks. There may also be longer-term implications, as reductions in the pool caused by hot summers can reduce cold-related mortality in the following winter. A state-space model was used to simulate the number of people in the susceptible pool over time. We simulated the effects of harsh winters and heat waves, and varied the size of the susceptible pool. The larger the susceptible pool, the smaller the mortality displacement. When 1% of the population was susceptible, a harsh winter led to an average of just 3 months of life lost per cold-related death, whereas a pool size of 10% meant that 24 months of life were lost per death. The impact of a cold spell on months of life lost was greater when the increased risk of death also applied to healthy people. The number of deaths caused by an August heat wave was reduced when a prior heat wave in June had already reduced the susceptible pool. We were able to mimic some observed seasonal patterns in mortality using a simple state-space model. A better understanding of the size and dynamics of the susceptible pool will improve our understanding of the health effects of temperature.
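
The displacement mechanism can be illustrated with a deterministic toy version of such a pool model (the numbers below are illustrative only, and the study itself uses a stochastic state-space formulation): the same cold-snap excess removes a larger share of a small pool, so the subsequent dip in deaths is deeper.

```python
import numpy as np

def simulate(pool_fraction, weeks=104, population=1_000_000,
             total_weekly_deaths=180, cold_weeks=range(26, 30), cold_extra=200):
    """Toy susceptible-pool model: all deaths are drawn from the pool, and a
    cold snap adds extra deaths that temporarily deplete it."""
    pool = pool_fraction * population
    hazard = total_weekly_deaths / pool          # per-member weekly risk of death
    recruit = total_weekly_deaths                # inflow keeps the pool in balance
    deaths = []
    for week in range(weeks):
        d = hazard * pool + (cold_extra if week in cold_weeks else 0.0)
        pool = pool - d + recruit
        deaths.append(d)
    return np.array(deaths)

# The same cold snap removes a larger share of a small pool, so the deficit
# (displaced deaths) in the following weeks is larger.
for frac in (0.01, 0.10):
    d = simulate(frac)
    baseline = d[0]
    excess = (d[26:30] - baseline).sum()
    deficit = (baseline - d[30:]).sum()
    print(f"pool={frac:.0%}: excess deaths={excess:.0f}, deaths displaced={deficit:.0f}")
```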

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a class of fractional advection–dispersion models (FADMs) is considered. This class includes five models: the time FADM, the mobile/immobile time FADM with a Caputo time fractional derivative of order 0 < γ < 1, the space FADM with two-sided Riemann–Liouville derivatives, the time–space FADM, and the time fractional advection–diffusion-wave model with damping of index 1 < γ < 2. These equations can be used to simulate regional-scale anomalous dispersion with heavy tails. We propose computationally effective implicit numerical methods for these FADMs. The stability and convergence of the implicit numerical methods are analysed and compared systematically. Finally, some results are given to demonstrate the effectiveness of the theoretical analysis.
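
As a representative member of this class, written in generic notation rather than the paper's own, the time fractional advection–dispersion model with a Caputo derivative of order 0 < γ < 1 reads:

```latex
% Time fractional advection--dispersion model (generic form), with advection
% velocity v, dispersion coefficient D, and a Caputo time derivative of order
% 0 < \gamma < 1:
{}^{C}_{0}D_{t}^{\gamma}\, u(x,t)
  \;=\; -\, v \,\frac{\partial u(x,t)}{\partial x}
        \;+\; D \,\frac{\partial^{2} u(x,t)}{\partial x^{2}},
\qquad
{}^{C}_{0}D_{t}^{\gamma}\, u(x,t)
  \;=\; \frac{1}{\Gamma(1-\gamma)} \int_{0}^{t}
        \frac{\partial u(x,s)}{\partial s}\,\frac{ds}{(t-s)^{\gamma}} .
```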

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a class of fractional advection-dispersion models (FADMs) is investigated. This class includes five models: the immobile and the mobile/immobile time FADMs with a temporal fractional derivative of order 0 < γ < 1, the space FADM with skewness, the combined time and space FADM, and the time fractional advection-diffusion-wave model with damping of index 1 < γ < 2. They describe nonlocal dependence on either time or space, or both, to explain the development of anomalous dispersion. These equations can be used to simulate regional-scale anomalous dispersion with heavy tails, for example the solute transport in watershed catchments and rivers. We propose computationally effective implicit numerical methods for these FADMs. The stability and convergence of the implicit numerical methods are analyzed and compared systematically. Finally, some results are given to demonstrate the effectiveness of our theoretical analysis.
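
To give the flavour of such an implicit scheme, below is a minimal L1-type implicit finite-difference sketch for the dispersion-only, time-fractional case (0 < γ < 1) on a uniform grid with zero boundary values. It is a generic textbook discretisation, not one of the specific methods analysed in the paper, and advection and the other four models are omitted.

```python
import numpy as np
from math import gamma

# Time-fractional diffusion  D_t^gamma u = K u_xx  on [0, L], u = 0 at the boundaries.
gam, K, L, T = 0.7, 1.0, 1.0, 0.5
nx, nt = 50, 200
h, tau = L / nx, T / nt
x = np.linspace(0, L, nx + 1)

# L1 weights for the Caputo derivative: b_k = (k+1)^(1-gamma) - k^(1-gamma).
k = np.arange(nt)
b = (k + 1) ** (1 - gam) - k ** (1 - gam)

# Implicit system (I - mu * A) u^n = history, A being the second-difference matrix.
mu = K * tau ** gam * gamma(2 - gam) / h ** 2
A = np.diag(-2.0 * np.ones(nx - 1)) + np.diag(np.ones(nx - 2), 1) + np.diag(np.ones(nx - 2), -1)
M = np.eye(nx - 1) - mu * A

u = [np.sin(np.pi * x)[1:-1]]            # initial condition on the interior nodes
for n in range(1, nt + 1):
    # History term from the L1 approximation of the Caputo derivative.
    hist = u[n - 1].copy()
    for j in range(1, n):
        hist -= b[j] * (u[n - j] - u[n - j - 1])
    u.append(np.linalg.solve(M, hist))

print("max of solution at t =", T, ":", float(u[-1].max()))
```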

Relevance:

30.00%

Publisher:

Abstract:

This month, Jan Recker turns his attention to the technological side of BPM research and education. He engaged in a collaboration with two colleagues at Queensland University of Technology, Dr Marcello La Rosa and Eike Bernhard, on the development of an advanced BPM technology: an Advanced Process Model Repository called Apromore. In this column, they use the example of Apromore to showcase how BPM technologies are conceived, designed, developed and applied.

Relevance:

30.00%

Publisher:

Abstract:

This paper examines the Rural Schools of Queensland. Starting with Nambour in 1917, the scheme incorporated thirty schools and operated for over forty years. The rhetoric of the day was that boys and girls from the senior classes of primary school would be provided with elementary instruction of a practical character. In reality, the subjects taught were specifically tailored to provide farm skills to children in rural centres engaged in farming, dairying or fruit growing. Linked to each Rural School was a number of smaller surrounding schools, from which students travelled to the Rural School for special agricultural or domestic instruction. Through this action, the Queensland Department of Public Instruction left no doubt that it intended to provide educational support for agrarian change and development within the state; in effect, it had set in motion the creation of a Queensland yeoman class. The Department's intention was to arrest or reverse the trend toward urbanisation, whilst increasing agricultural productivity, through the making of a farmer born of the land and accepting of the new scientific advances in agriculture.

Relevance:

30.00%

Publisher:

Abstract:

Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation through repeated sampling of data from the model and comparison of observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments: that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs, the motor unit effectively 'dies'. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually only surviving a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists.
Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss rather than indirect techniques such as muscle strength assessment, which generally is unable to detect progressions due to the body’s natural attempts at compensation. Part III of this thesis builds upon a previous Bayesian technique, which develops a sophisticated statistical model that takes into account physiological information about motor unit activation and various sources of uncertainties. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
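
A minimal likelihood-free (ABC) rejection sampler, illustrating the "simulate data, compare summary statistics" idea described above; the toy model (a normal location parameter), the summary statistic and the tolerance are placeholders, and the SMC-based ABC algorithms developed in the thesis are far more efficient than this.

```python
import numpy as np

rng = np.random.default_rng(3)

# Observed data from a model whose likelihood we pretend is intractable,
# so that only forward simulation is available.
observed = rng.normal(loc=2.0, scale=1.0, size=100)
s_obs = observed.mean()                      # summary statistic

def simulate(theta, n=100):
    """Forward-simulate a data set from the model given parameter theta."""
    return rng.normal(loc=theta, scale=1.0, size=n)

# ABC rejection: draw theta from the prior, simulate data, and keep theta when
# the simulated summary statistic is within a tolerance of the observed one.
prior_draws = rng.uniform(-10, 10, size=50_000)
tolerance = 0.1
accepted = [t for t in prior_draws if abs(simulate(t).mean() - s_obs) < tolerance]

print(f"accepted {len(accepted)} of {len(prior_draws)} draws")
print(f"ABC posterior mean: {np.mean(accepted):.2f} (value used to generate the data: 2.0)")
```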

Relevance:

30.00%

Publisher:

Abstract:

The paper investigates the geographical mobility of creative workers in China, focusing on the authors' survey of workers in the animation industry. More specifically, the authors use the Reilly-Converse model and GIS tools to probe the locational choices of Chinese animation workers in Beijing and Shanghai by analyzing such factors of spatial attractiveness as home town, place of residence, and the university from which the worker graduated. The paper compares the creative milieus in Beijing and Shanghai, and demonstrates that the “personal trajectory” of human capital is a key determinant of occupational location. The results of the authors' survey highlight the limitations of Richard Florida's 3T theory in the Chinese context.
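
For reference, the Converse breaking-point form of the Reilly gravity model, as it is usually stated (this generic form is assumed here rather than quoted from the paper), gives the distance from centre B at which the attraction of centres A and B is equal, with P a measure of each centre's mass (e.g. population) and d_AB the distance between them:

```latex
% Converse breaking point of Reilly's gravity model: distance from centre B
% at which the attraction of A and B is equal.
d_{B} \;=\; \frac{d_{AB}}{\,1 + \sqrt{P_{A}/P_{B}}\,}
```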

Relevance:

30.00%

Publisher:

Abstract:

Introduction: QC and EQA are integral to good pathology laboratory practice. Medical Laboratory Science students undertake a project exploring the internal QC and EQA procedures used in chemical pathology laboratories. Each student represents an individual lab, and the class group represents the peer group of labs performing the same assay using the same method.

Methods: Using a manual BCG assay for serum albumin, normal and abnormal controls are run with a patient sample over 7 weeks. The QC results are assessed each week using calculated z-scores and both 2S and 3S control rules to determine whether a run is ‘in control’. At the end of the 7 weeks, a completed LJ chart is assessed using the Westgard multirules. Students investigate causes of error and the implications for both lab practice and patient care if runs are not ‘in control’. Twice in the 7 weeks, two EQA samples (with target values unknown) are assayed alongside the weekly QC and patient samples. Results from each student are collated and form the basis of an EQA program.

Results: ALP are provided and students complete a Youden plot, which is used to analyse the performance of each ‘lab’ and of the method to identify bias. Students explore the possible clinical implications of a biased method and address the actions that should be taken if a lab is not in consensus with the peer group.

Conclusion: This project is a model of ‘real world’ practice in which students demonstrate an understanding of the importance of QC procedures in a pathology laboratory, apply and interpret statistics and QC rules and charts, apply critical thinking and analytical skills to quality performance data to make recommendations for further practice, and improve their technical competence and confidence.
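
A minimal sketch of the weekly z-score calculation is given below, using placeholder albumin control results; the 2S warning and 3S rejection checks shown are the simple single-rule versions, not the full Westgard multirule evaluation applied to the completed LJ chart.

```python
import numpy as np

# Established mean and SD for the normal-level albumin control (placeholder values, g/L).
target_mean, target_sd = 40.0, 1.2

# One control result per weekly run over the 7-week project (placeholder data).
weekly_results = np.array([40.5, 39.2, 41.1, 42.6, 38.1, 40.2, 44.0])

z = (weekly_results - target_mean) / target_sd
for week, (result, zscore) in enumerate(zip(weekly_results, z), start=1):
    if abs(zscore) > 3:
        status = "reject run (outside 3S)"   # gross or random error: run is out of control
    elif abs(zscore) > 2:
        status = "warning (outside 2S)"      # warning limit: inspect the LJ chart / multirules
    else:
        status = "in control"
    print(f"week {week}: albumin = {result:.1f} g/L, z = {zscore:+.2f} -> {status}")
```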