134 results for Models and Methods
Abstract:
Problems involving the solution of advection-diffusion-reaction equations on domains and subdomains whose growth affects and is affected by these equations commonly arise in developmental biology. Here, a mathematical framework for these situations, together with methods for obtaining spatio-temporal solutions and steady states of models built from this framework, is presented. The framework and methods are applied to a recently published model of epidermal skin substitutes. Despite the use of Eulerian schemes, excellent agreement is obtained between the numerical spatio-temporal, numerical steady state, and analytical solutions of the model.
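The abstract does not reproduce the model equations, so the following is only a minimal illustrative sketch, in Python, of the kind of computation involved: a 1D reaction-diffusion equation on a uniformly (exponentially) growing domain, mapped onto a fixed computational interval so that a standard Eulerian finite-difference scheme can be applied. The growth law, parameter values, logistic reaction term, and initial condition are all assumptions for illustration, not the published model.

```python
import numpy as np

# Sketch: solve c_t = D c_xx + k c (1 - c) on a domain growing as L(t) = L0 * exp(r t).
# Mapping x = L(t) * xi (xi in [0, 1]) turns uniform exponential growth into a
# fixed-domain equation with a dilution term: c_t = (D / L^2) c_xixi - r c + k c (1 - c).
D, k, r = 1e-3, 1.0, 0.05       # diffusivity, reaction rate, growth rate (assumed)
L0, T = 1.0, 10.0               # initial domain length, final time (assumed)
n = 101
dxi = 1.0 / (n - 1)
dt = 0.4 * dxi**2 * L0**2 / D   # explicit stability limit is worst at t = 0 (L smallest)

xi = np.linspace(0.0, 1.0, n)
c = np.exp(-100 * (xi - 0.5) ** 2)  # assumed initial condition: a Gaussian pulse

t = 0.0
while t < T:
    L = L0 * np.exp(r * t)
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dxi**2
    c += dt * (D / L**2 * lap - r * c + k * c * (1 - c))
    c[0], c[-1] = c[1], c[-2]   # zero-flux boundaries via ghost-node copy
    t += dt

L_final = L0 * np.exp(r * T)
print(f"final domain length {L_final:.2f}, total mass {c.sum() * dxi * L_final:.3f}")
```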
Abstract:
This paper addresses the problem of discovering business process models from event logs. Existing approaches to this problem strike various trade-offs between accuracy and understandability of the discovered models. With respect to the second criterion, empirical studies have shown that block-structured process models are generally more understandable and less error-prone than unstructured ones. Accordingly, several automated process discovery methods generate block-structured models by construction. These approaches, however, intertwine the concern of producing accurate models with that of ensuring their structuredness, sometimes sacrificing the former to ensure the latter. In this paper we propose an alternative approach that separates these two concerns. Instead of directly discovering a structured process model, we first apply a well-known heuristic technique that discovers more accurate but sometimes unstructured (and even unsound) process models, and then transform the resulting model into a structured one. An experimental evaluation shows that our “discover and structure” approach outperforms traditional “discover structured” approaches with respect to a range of accuracy and complexity measures.
Abstract:
Even in an ostensibly technology-driven society, the ability to communicate orally is still seen as an essential skill for students at school and university, as it is for graduates in the workplace. The need to develop effective oral communication skills is often tied to future work-related tasks. One tangible way that educators have assessed proficiency in this area is through prepared oral presentations. While some use the terms oral communication and oral presentation interchangeably, other writers question the role more formal presentations play in the overall development of oral communication skills. Adding to the discussion, this paper is part of a larger study examining the knowledge and skills students bring into the academy from previous educational experiences. The study examines some of the teaching and assessment methods used in secondary schools to develop oral communication skills through the use of formal oral presentations. Specifically, it looks at assessment models and how these are used as a form of instruction, as well as how they contribute to an accurate evaluation of student abilities. The purpose of this paper is to explore key terms and identify tensions between expectations and practice. Placing the emphasis on the ‘oral’ aspect of this form of communication, this paper looks particularly at the ‘delivery’ element of the process.
Abstract:
Realistic estimates of short- and long-term (strategic) budgets for road maintenance and rehabilitation in road asset management should consider the stochastic characteristics of asset conditions across road networks, so that the overall variability of road asset condition data is taken into account. Probability theory has been used for assessing life-cycle costs of bridge infrastructure by Kong and Frangopol (2003), Zayed et al. (2002), Liu and Frangopol (2004), Noortwijk and Frangopol (2004), and Novick (1993). Salem et al. (2003) cited the importance of collecting and analysing existing data on total costs across all life-cycle phases of existing infrastructure, including bridges and roads, and of using realistic methods for calculating the probable useful life of these assets. Zayed et al. (2002) reported conflicting results in life-cycle cost analysis using deterministic and stochastic methods. Frangopol et al. (2001) suggested that additional research was required to develop better life-cycle models and tools to quantify the risks and benefits associated with infrastructure. It is evident from the review of the literature that there is very limited information on methodologies that use the stochastic characteristics of asset condition data for assessing budgets/costs for road maintenance and rehabilitation (Abaza 2002, Salem et al. 2003, Zhao et al. 2004). Given this gap in the research literature, this report describes and summarises the methodologies presented in each publication and also suggests a methodology for the current research project funded under the Cooperative Research Centre for Construction Innovation (CRC CI) project no. 2003-029-C.
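The report contrasts deterministic and stochastic life-cycle costing; a minimal hedged Python sketch of that contrast follows. The simple condition-index model and all numbers (deterioration rates, trigger level, treatment cost) are hypothetical illustrations, not the methodology of any of the cited publications.

```python
import numpy as np

def lifecycle_cost(rates, horizon=30, trigger=40.0, reset=90.0, unit_cost=250_000):
    """Total rehabilitation cost over `horizon` years for each deterioration-rate
    scenario: the condition index falls by `rates` per year, and a treatment that
    resets it to `reset` is triggered whenever it drops below `trigger`."""
    rates = np.atleast_1d(rates).astype(float)
    condition = np.full(rates.shape, 90.0)
    total = np.zeros(rates.shape)
    for _ in range(horizon):
        condition -= rates
        needs_work = condition < trigger
        total[needs_work] += unit_cost
        condition[needs_work] = reset
    return total

rng = np.random.default_rng(42)
deterministic = lifecycle_cost(3.0)[0]                        # single "average" rate
stochastic = lifecycle_cost(rng.lognormal(np.log(3.0), 0.4, size=10_000))  # median 3
print(f"deterministic budget : {deterministic:12,.0f}")
print(f"stochastic mean      : {stochastic.mean():12,.0f}")
print(f"stochastic 90th pct. : {np.percentile(stochastic, 90):12,.0f}")
```

Even this toy model shows why the two approaches can disagree: a budget based on the average rate misses the extra treatments triggered in the faster-deteriorating scenarios.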
Abstract:
This paper reviews the main developments in approaches to modelling urban public transit users’ route choice behaviour from the 1960s to the present. The approaches reviewed include the early heuristic studies on finding the least-cost transit route and all-or-nothing transit assignment, the bus common-line problem and corresponding network representation methods, the disaggregate discrete choice models based on random utility maximization assumptions, the deterministic user equilibrium and stochastic user equilibrium transit assignment models, and the recent dynamic transit assignment models using either frequency-based or schedule-based network formulations. In addition to reviewing past outcomes, this paper also gives an outlook on possible future directions for modelling transit users’ route choice behaviour. Based on a comparison with the development of models for motorists’ route choice and traffic assignment in urban road networks, this paper points out that transit route choice research can profitably draw inspiration from the intellectual outcomes of the road traffic field. In particular, in light of recent advances in modelling motorists’ complex road route choice behaviour, this paper advocates that the modelling of transit users’ route choice should further explore the complexities of the problem.
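As one concrete illustration of the random-utility discrete choice models the review covers, here is a minimal Python sketch of multinomial logit route choice probabilities over a set of transit routes. The cost values and scale parameter are invented for the example and are not from the paper.

```python
import numpy as np

def logit_route_probs(costs, theta=0.5):
    """Multinomial logit choice probabilities for routes with generalised costs
    `costs`; utility is -theta * cost plus an i.i.d. Gumbel error term."""
    v = -theta * np.asarray(costs, dtype=float)
    v -= v.max()                 # stabilise the exponentials
    expv = np.exp(v)
    return expv / expv.sum()

# Three hypothetical routes with generalised costs of 30, 34 and 40 minutes.
print(logit_route_probs([30.0, 34.0, 40.0]))  # the cheapest route gets the largest share
```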
Abstract:
This research examines how men react to male models in print advertisements. In two experiments, we show that the gender identity of men influences their responses to advertisements featuring a masculine, feminine, or androgynous male model. In addition, we explore the extent to which men feel they will be classified by others as similar to the model as a mechanism for these effects. Specifically, masculine men respond most favorably to masculine models and react negatively toward feminine models. In contrast, feminine men prefer feminine models when their private self is salient. Yet in a collective context, they prefer masculine models. These experiments shed light on how gender identity and self-construal influence male evaluations and illustrate the social pressure on men to endorse traditional masculine portrayals. We also present implications for advertising practice.
Abstract:
In two experiments, we show that the beliefs women hold about the controllability of their weight (i.e., weight locus of control) influence their responses to advertisements featuring a larger-sized female model or a slim female model. Further, we examine self-referencing as a mechanism for these effects. Specifically, people who believe they can control their weight (“internals”) respond most favorably to slim models in advertising, and this favorable response is mediated by self-referencing. In contrast, people who feel powerless about their weight (“externals”) self-reference larger-sized models, but only prefer larger-sized models when the advertisement is for a non-fattening product. For fattening products, they exhibit a similar preference for larger-sized models and slim models. Together, these experiments shed light on the effect of model body size and the role of weight locus of control in influencing consumer attitudes.
Abstract:
Evidence-based Practice (EBP) has recently emerged as a topic of discussion amongst professionals within the library and information services (LIS) industry. Simply stated, EBP is the process of using formal research skills and methods to assist in decision making and establishing best practice. The emerging interest in EBP within the library context serves to remind the library profession that research skills and methods can help ensure that the library industry remains current and relevant in changing times. The LIS sector faces ongoing challenges in terms of the expectation that financial and human resources will be managed efficiently, particularly if library budgets are reduced and accountability to the principal stakeholders is increased. Library managers are charged with the responsibility to deliver relevant and cost effective services, in an environment characterised by rapidly changing models of information provision, information access and user behaviours. Consequently, they are called upon not only to justify the services they provide, or plan to introduce, but also to measure the effectiveness of these services and to evaluate their impact on the communities they serve. The imperative for innovation in and enhancements to library practice is accompanied by the need for a strong understanding of the processes of review, measurement, assessment and evaluation. In 2001 the Centre for Information Research was commissioned by the Chartered Institute of Library and Information Professionals (CILIP) in the UK to conduct an examination of the research landscape for library and information science. The examination concluded that research is “important for the LIS [library and information science] domain in a number of ways” (McNicol & Nankivell, 2001, p.77). At the professional level, research can inform practice, assist in the future planning of the profession, and raise the profile of the discipline, and indeed the reputation and standing of the library and information service itself. At the personal level, research can “broaden horizons and offer individuals development opportunities” (McNicol & Nankivell, 2001, p.77). The study recommended that “research should be promoted as a valuable professional activity for practitioners to engage in” (McNicol & Nankivell, 2001, p.82). This chapter considers the role of EBP within the library profession. A brief review of key literature in the area is provided. The review considers issues of definition and terminology, highlights the importance of research in professional practice and outlines the research approaches that underpin EBP. The chapter concludes with a consideration of the specific application of EBP within the dynamic and evolving field of information literacy (IL).
Abstract:
This article presents a survey of authorisation models and considers their ‘fitness-for-purpose’ in facilitating information sharing. Network-supported information sharing is an important technical capability that underpins collaboration in support of dynamic and unpredictable activities such as emergency response, national security, infrastructure protection, supply chain integration and emerging business models based on the concept of a ‘virtual organisation’. The article argues that present authorisation models are inflexible and poorly scalable in such dynamic environments due to their assumption that the future needs of the system can be predicted, which in turn justifies the use of persistent authorisation policies. The article outlines the motivation and requirements for a new flexible authorisation model that addresses the needs of information sharing. It proposes that a flexible and scalable authorisation model must allow an explicit specification of the objectives of the system, and that access decisions must be made based on a late trade-off analysis between these explicit objectives. A research agenda for the proposed Objective-based Access Control concept is presented.
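The abstract presents Objective-based Access Control only as a research agenda, without a concrete mechanism, so the Python sketch below is purely speculative: one way a “late trade-off analysis between explicit objectives” might look, scoring each request at decision time against weighted system objectives rather than consulting a persistent allow/deny policy. All class names, objective names, and weights are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Objective:
    """A hypothetical explicit system objective: scores a request in [-1, 1],
    where positive values favour granting access."""
    weight: float
    score: Callable[[Dict], float]

def decide(request: Dict, objectives: Dict[str, Objective]) -> bool:
    """Late trade-off: weigh all objectives at decision time and grant access
    if the weighted net benefit of sharing outweighs the weighted risk."""
    net = sum(o.weight * o.score(request) for o in objectives.values())
    return net > 0.0

# Hypothetical emergency-response objectives.
objectives = {
    "operational_continuity": Objective(
        weight=0.7, score=lambda r: 1.0 if r["emergency"] else 0.0),
    "confidentiality": Objective(
        weight=0.5, score=lambda r: -1.0 if r["sensitivity"] == "high" else -0.2),
}

request = {"emergency": True, "sensitivity": "high"}
print(decide(request, objectives))  # True: here continuity outweighs confidentiality
```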
Abstract:
Fire design is an essential element of the overall design procedure of structural steel members and systems. Conventionally, the fire rating of load-bearing stud wall systems made of light gauge steel frames (LSF) is based on approximate prescriptive methods developed on the basis of limited fire tests. This design approach is limited to the standard wall configurations used by the industry, and increased fire ratings are provided simply by adding more plasterboards to the stud walls. This is not an acceptable situation, as it not only inhibits innovation and structural and cost efficiencies but also casts doubt over the fire safety of these light gauge steel stud wall systems. Hence a detailed fire research study into the performance and effectiveness of a recently developed innovative composite panel wall system was undertaken at Queensland University of Technology using both full scale fire tests and numerical studies. Experimental results of LSF walls using the new composite panels under axial compression load have shown the improvement in fire performance and fire resistance rating. Numerical analyses are currently being undertaken using the finite element program ABAQUS. Measured temperature profiles of the studs are used in the numerical models, and the results are calibrated against full scale test results. The validated model will be used in a detailed parametric study with the aim of developing suitable design rules within the current cold-formed steel structures and fire design standards. This paper presents the results of experimental and numerical investigations into the structural and fire behaviour of light gauge steel stud walls protected by the new composite panel, and demonstrates the improvements provided by the new composite panel system in comparison to traditional wall systems.
Abstract:
Recent years have seen an increased uptake of business process management technology in industry. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories. For example, in some cases new process models may be derived from existing models; thus finding these models and adapting them may be more effective than developing them from scratch. As process model repositories may be large, query evaluation may be time consuming. Hence, we investigate the use of indexes to speed up this evaluation process. Experiments are conducted to demonstrate that our proposal achieves a significant reduction in query evaluation time.
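The abstract does not specify which index structure is used, so the Python sketch below shows just one plausible option: an inverted index from task labels to model identifiers, used to cheaply prune the repository to candidate models before any expensive graph-based matching. The class and method names are hypothetical.

```python
from collections import defaultdict

class LabelIndex:
    """Inverted index: task label -> set of process model ids containing it."""
    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, model_id, task_labels):
        for label in task_labels:
            self.postings[label].add(model_id)

    def candidates(self, query_labels):
        """Models containing every query label; intersect the smallest
        posting lists first so the intersection stays cheap."""
        lists = sorted((self.postings.get(l, set()) for l in query_labels), key=len)
        if not lists:
            return set()
        result = set(lists[0])
        for postings in lists[1:]:
            result &= postings
        return result

index = LabelIndex()
index.add("m1", {"receive order", "check stock", "ship goods"})
index.add("m2", {"receive order", "reject order"})
print(index.candidates({"receive order", "check stock"}))  # {'m1'}
```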
Abstract:
This thesis addresses computational challenges arising from Bayesian analysis of complex real-world problems. Many of the models and algorithms designed for such analysis are ‘hybrid’ in nature, in that they are a composition of components whose individual properties may be easily described but whose performance as a whole is less well understood. The aim of this research project is to offer a better understanding of the performance of hybrid models and algorithms. The goal of this thesis is to analyse the computational aspects of hybrid models and hybrid algorithms in the Bayesian context.

The first objective of the research focuses on computational aspects of hybrid models, notably a continuous finite mixture of t-distributions. In the mixture model, an inference of interest is the number of components, as this may relate to both the quality of model fit to data and the computational workload. The analysis of t-mixtures using Markov chain Monte Carlo (MCMC) is described and the model is compared to the Normal case based on the goodness of fit. Through simulation studies, it is demonstrated that the t-mixture model can be more flexible and more parsimonious in terms of the number of components, particularly for skewed and heavy-tailed data. The study also reveals important computational issues associated with the use of t-mixtures, which have not been adequately considered in the literature.

The second objective of the research focuses on computational aspects of hybrid algorithms for Bayesian analysis. Two approaches are considered: a formal comparison of the performance of a range of hybrid algorithms and a theoretical investigation of the performance of one of these algorithms in high dimensions. For the first approach, the delayed rejection algorithm, the pinball sampler, the Metropolis adjusted Langevin algorithm, and the hybrid version of the population Monte Carlo (PMC) algorithm are selected as a set of examples of hybrid algorithms. The statistical literature shows that statistical efficiency is often the only criterion applied to judge an algorithm. In this thesis the algorithms are also considered and compared from a more practical perspective. This extends to the study of how individual algorithms contribute to the overall efficiency of hybrid algorithms, and highlights weaknesses that may be introduced by the process of combining these components in a single algorithm. The second approach to considering computational aspects of hybrid algorithms involves an investigation of the performance of the PMC in high dimensions. It is well known that as a model becomes more complex, computation may become increasingly difficult in real time. In particular, importance sampling based algorithms, including the PMC, are known to be unstable in high dimensions. This thesis examines the PMC algorithm in a simplified setting, a single step of the general sampler, and explores a fundamental problem that occurs in applying importance sampling to a high-dimensional problem. The precision of the computed estimate from the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. The exponential growth of this asymptotic variance with the dimension is demonstrated, and we illustrate that the optimal covariance matrix for the importance function can be estimated in a special case.
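The degradation of importance sampling in high dimensions that the thesis investigates is easy to reproduce numerically. The Python sketch below is ours, not from the thesis: targeting N(0, I_d) with an N(0, sigma^2 I_d) proposal, the effective sample size of the self-normalised weights collapses as the dimension d grows, reflecting the geometric growth of the second moment of the weights.

```python
import numpy as np

def importance_ess(d, n=100_000, sigma=1.5, rng=None):
    """Effective sample size of self-normalised importance weights when
    targeting N(0, I_d) with an N(0, sigma^2 I_d) proposal."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(scale=sigma, size=(n, d))
    sq = np.sum(x**2, axis=1)
    # log w = log p(x) - log q(x); the (2*pi)^(d/2) constants cancel.
    logw = -0.5 * sq + 0.5 * sq / sigma**2 + d * np.log(sigma)
    w = np.exp(logw - logw.max())   # shift for numerical stability
    w /= w.sum()
    return 1.0 / np.sum(w**2)

for d in (1, 5, 10, 20, 40):
    print(f"d = {d:2d}  ESS = {importance_ess(d):10.1f}")
# The ESS collapses as d grows: the 'exponential growth of the asymptotic
# variance with the dimension' referred to in the abstract.
```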
Abstract:
A pragmatic method for assessing the accuracy and precision of a given processing pipeline required for converting computed tomography (CT) image data of bones into representative three dimensional (3D) models of bone shapes is proposed. The method is based on coprocessing a control object with known geometry, which enables the assessment of the quality of the resulting 3D models. At three stages of the conversion process, distance measurements were obtained and statistically evaluated. For this study, 31 CT datasets were processed. The final 3D model of the control object contained an average deviation from reference values of −1.07±0.52 mm standard deviation (SD) for edge distances and −0.647±0.43 mm SD for parallel side distances of the control object. Coprocessing a reference object enables the assessment of the accuracy and precision of a given processing pipeline for creating CT-based 3D bone models and is suitable for detecting most systematic or human errors when processing a CT scan. Typical errors are of about the same size as the scan resolution.
Abstract:
Although many different materials, techniques and methods, including artificial or engineered bone substitutes, have been used to repair various bone defects, the restoration of critical-sized bone defects caused by trauma, surgery or congenital malformation remains a great challenge for orthopedic surgeons. One important fact that has been neglected in the pursuit of solutions for large bone defect healing is that most physiological bone defect healing needs the periosteum, and stripping off the periosteum may result in non-union or non-healed bone defects. The periosteum plays very important roles not only in bone development but also in bone defect healing. The purpose of this project was to construct a functional periosteum in vitro using a single stem cell source and then test its ability to aid the repair of critical-sized bone defects in animal models. This project was designed with three separate but closely linked parts, which in the end led to four independent papers. The first part of this study investigated the structural and cellular features of periostea from diaphyseal and metaphyseal bone surfaces in rats of different ages or with osteoporosis. Histological and immunohistological methods were used in this part of the study. Results revealed that the structure and cell populations of the periosteum are both age-related and site-specific. The diaphyseal periosteum showed age-related degeneration, whereas the metaphyseal periosteum deteriorated more markedly in older rats. The periosteum from osteoporotic bones differs from that of normal bones in terms of both structure and cell populations. This is especially evident in the cambial layer of the metaphyseal area. Bone resorption appears to be more active in the periosteum from osteoporotic bones, whereas bone formation activity is comparable between osteoporotic and normal bone. The dysregulation of bone resorption and formation in the periosteum may also be the effect of interactions between various neural pathways and the cell populations residing within it. One of the most important questions in periosteum engineering is how to introduce new blood vessels into the engineered periosteum to help form vascularized bone tissue in bone defect areas. The second part of this study was designed to investigate the possibility of differentiating bone marrow stromal cells (BMSCs) into endothelial cells and using them to construct a vascularized periosteum. The endothelial cell differentiation of BMSCs was induced in pro-angiogenic media under both normoxia and CoCl2 (hypoxia-mimicking agent)-induced hypoxia conditions. The VEGF/PEDF expression pattern, endothelial cell-specific marker expression, and in vitro and in vivo vascularization ability of BMSCs cultured under different conditions were assessed. Results revealed that BMSCs most likely cannot be differentiated into endothelial cells through the application of pro-angiogenic growth factors or by culturing under CoCl2-induced hypoxic conditions. However, they may be involved in angiogenesis as regulators under both normoxia and hypoxia conditions. Two major angiogenesis-related growth factors, VEGF (pro-angiogenic) and PEDF (anti-angiogenic), were found to alter their expression in accordance with the extracellular environment. BMSCs treated with the hypoxia-mimicking agent CoCl2 expressed more VEGF and less PEDF and enhanced the vascularization of subcutaneous implants in vivo.
Based on the findings of the second part, the CoCl2 pre-treated BMSCs were used to construct periosteum, and the in vivo vascularization and osteogenesis of the constructed periosteum were assessed in the third part of this project. The findings of the third part revealed that BMSCs pre-treated with CoCl2 could enhance both ectopic and orthotopic osteogenesis of BMSC-derived osteoblasts and vascularization at the early osteogenic stage, whereas the endothelial cells (HUVECs), which were used as a positive control, were only capable of promoting osteogenesis after four weeks. The subcutaneous area of the mouse is most likely inappropriate for assessing new bone formation on collagen scaffolds. This study demonstrated the potential application of CoCl2 pre-treated BMSCs in tissue engineering, not only for periosteum but also for bone and other vascularized tissues. In summary, the structure and cell populations of the periosteum are age-related, site-specific and closely linked with bone health status. BMSCs as a stem cell source for periosteum engineering are not endothelial cell progenitors but regulators, and CoCl2-treated BMSCs expressed more VEGF and less PEDF. These CoCl2-treated BMSCs enhanced both vascularization and osteogenesis in constructed periosteum transplanted in vivo.
Abstract:
A national-level safety analysis tool is needed to complement existing analytical tools for assessing the safety impacts of roadway design alternatives. FHWA has sponsored the development of the Interactive Highway Safety Design Model (IHSDM), roadway design and redesign software that estimates the safety effects of alternative designs. Considering the importance of IHSDM in shaping the future of safety-related transportation investment decisions, FHWA justifiably sponsored research with the sole intent of independently validating some of the statistical models and algorithms in IHSDM. Statistical model validation aims to accomplish many important tasks, including (a) assessment of the logical defensibility of proposed models, (b) assessment of the transferability of models over future time periods and across different geographic locations, and (c) identification of areas in which future model improvements should be made. These three activities are reported for five proposed types of rural intersection crash prediction models. The internal validation revealed that the crash models potentially suffer from omitted variables that affect safety, site selection and countermeasure selection bias, poorly measured and surrogate variables, and misspecification of model functional forms. The external validation indicated that the models were unable to perform on par with their performance during estimation. Recommendations for improving the state of the practice from this research include the systematic conduct of carefully designed before-and-after studies, improvements in data standardization and collection practices, and the development of analytical methods to combine the results of before-and-after studies with cross-sectional studies in a meaningful and useful way.
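As a hedged illustration of the kind of external validation the paper reports (not its actual procedure or data), the Python sketch below compares observed and predicted crash frequencies on a hold-out set using a calibration factor and mean absolute deviation, two checks commonly applied to crash prediction models. All counts are synthetic.

```python
import numpy as np

def external_validation(observed, predicted):
    """Calibration factor (observed/predicted totals) and mean absolute
    deviation of a crash prediction model on hold-out sites."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    calibration = observed.sum() / predicted.sum()
    mad_raw = np.mean(np.abs(observed - predicted))
    mad_cal = np.mean(np.abs(observed - calibration * predicted))
    return calibration, mad_raw, mad_cal

rng = np.random.default_rng(7)
predicted = rng.gamma(shape=2.0, scale=1.5, size=200)   # synthetic model output
observed = rng.poisson(1.3 * predicted)                 # model under-predicts by ~30%
c, mad_raw, mad_cal = external_validation(observed, predicted)
print(f"calibration factor {c:.2f}, MAD before {mad_raw:.2f}, after {mad_cal:.2f}")
```

A calibration factor far from 1.0, as in this synthetic example, is one symptom of the transferability problems the validation exercise is designed to detect.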