947 results for curved-layer fused deposition modelling (FDM)
Abstract:
With the growth of the Web, e-commerce activities have also become popular. Product recommendation is an effective way of marketing a product to potential customers. Most recommendation methods employ two-dimensional models to find items relevant to a user's previous searches, and those items are then recommended to the user. However, too many irrelevant recommendations worsen the information overload problem, because models based on vectors and matrices are unable to find the latent relationships that exist between users and searches. Identifying user behaviour is a complex process that usually involves comparing the searches a user has made. In most cases, traditional vector- and matrix-based methods are used to find the prominent features in a user's searches. In this research we employ tensors to find the relevant features in users' searches, and these features are then used for making recommendations. Evaluation on real datasets shows the effectiveness of such recommendations over vector- and matrix-based methods.
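The vector-based baseline this abstract criticises typically reduces to a pairwise similarity measure over a user-by-keyword matrix. As a minimal illustration of that baseline (all names and counts here are hypothetical toy data, not from the study):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical user-by-keyword search counts.
alice = [3, 0, 1, 0]
bob   = [2, 0, 1, 0]
carol = [0, 4, 0, 2]

sim_ab = cosine(alice, bob)    # high: overlapping search keywords
sim_ac = cosine(alice, carol)  # zero: no keywords in common
```

Such a measure only sees direct keyword overlap, which is precisely the limitation the tensor approach targets: relationships that are latent across more than two dimensions never register.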
Abstract:
A better understanding of the behaviour of prepared cane and bagasse, especially the ability to model the mechanical behaviour of bagasse as it is squeezed in a milling unit to extract juice, would help identify how to improve the current milling process; for example, to reduce final bagasse moisture. Previous investigations have established that juice flow through bagasse obeys Darcy's permeability law, that the grip of the rough surface of the grooves on the bagasse can be represented by the Mohr-Coulomb failure criterion for soils, and that the internal mechanical behaviour of the bagasse can be represented by critical state behaviour similar to that of sand and clay. Current finite element models (FEM) available in commercial software have adequate permeability models; however, commercial software does not contain an adequate mechanical model for bagasse. Progress has been made in the last ten years towards implementing a mechanical model for bagasse in finite element software code. This paper builds on that progress and takes a further step towards obtaining an adequate material model. In particular, the prediction of volume change during the shearing of normally consolidated final bagasse is addressed.
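The two established constitutive relations the abstract cites, Darcy's permeability law and the Mohr-Coulomb failure criterion, can be sketched numerically as follows (all parameter values are hypothetical, chosen only to illustrate the formulas, not taken from the paper):

```python
import math

def darcy_flux(k, mu, dp_dx):
    """Darcy's law: superficial flux q = -(k / mu) * dP/dx."""
    return -(k / mu) * dp_dx

def mohr_coulomb_strength(c, sigma_n, phi_deg):
    """Mohr-Coulomb shear strength: tau = c + sigma_n * tan(phi)."""
    return c + sigma_n * math.tan(math.radians(phi_deg))

# Hypothetical values: permeability k in m^2, viscosity mu in Pa.s,
# pressure gradient in Pa/m, cohesion c and normal stress sigma_n in Pa.
q = darcy_flux(1e-12, 1e-3, -5e4)          # flux in m/s, down the gradient
tau = mohr_coulomb_strength(5e3, 2e5, 30)  # shear strength in Pa
```

The milling problem couples the two: juice flow through the compressing mat follows the first relation, while the grip of the roller grooves is bounded by the second.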
Abstract:
In this work, we present the development of a Pt/graphene/SiC device for hydrogen gas sensing. A single layer of graphene was deposited on 6H-SiC via chemical vapor deposition. The presence of graphene C-C bonds was confirmed via X-ray photoelectron spectroscopy analysis. Current-voltage characteristics of the device were measured in the presence of hydrogen at different temperatures, from 25°C to 170°C. The dynamic response of the device towards hydrogen gas was recorded at an optimum temperature of 130°C. A voltage shift of 191 mV was recorded in response to 1% hydrogen at a constant current of −1 mA.
Abstract:
Technologies and languages for integrated processes are a relatively recent innovation, and over their short history many divergent waves of innovation have transformed process integration. Like sockets and distributed objects, early workflow systems offered programming interfaces that connected the process modelling layer to the underlying middleware. BPM systems emerged later, connecting the modelling world to middleware through components. While BPM systems increased ease of use (modelling convenience), long-standing and complex interactions involving many process instances remained difficult to model. Enterprise Service Buses (ESBs) followed, connecting process models to heterogeneous forms of middleware. ESBs, however, generally forced modellers to choose a particular underlying middleware and to stick to it, despite their ability to connect with many forms of middleware. Furthermore, ESBs encourage process integrations to be modelled on their own, logically separate from the process model. This can lead to an inability to reason about long-standing conversations at the process layer. Technologies and languages for process integration generally lack formality, which has led to arbitrariness in the underlying language building blocks. Conceptual holes exist in a range of technologies and languages for process integration, and these can lead to customer dissatisfaction and to integration projects failing to reach their potential. Standards for process integration share the same fundamental flaws as the languages and technologies, and are also in direct competition with one another, causing a lack of clarity. Thus the area of greatest risk in a BPM project remains process integration, despite major advancements in the technology base. This research examines some fundamental aspects of communication middleware and how these fundamental building blocks of integration can be brought to the process modelling layer in a technology-agnostic manner.
In this way, process modelling can be conceptually complete without becoming locked into a particular middleware technology. Coloured Petri nets are used to define a formal semantics for the fundamental aspects of communication middleware; they provide the means to define and model the dynamic aspects of various integration middleware. Process integration patterns are used as a tool to codify common problems to be solved. Object Role Modelling, a formal modelling technique, was used to define the syntax of a proposed process integration language. This thesis provides several contributions to the field of process integration. It proposes a framework defining the key notions of integration middleware, which provides a conceptual foundation upon which a process integration language could be built. It defines an architecture that allows various forms of middleware to be aggregated and reasoned about at the process layer. It provides a comprehensive set of process integration patterns, which constitute a benchmark for the kinds of problems a process integration language must support. It proposes a process integration modelling language and a partial implementation that is able to enact the language. A process integration pilot project in a German hospital, based on the ideas in this thesis, is briefly described at the end.
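The thesis uses Coloured Petri nets to give middleware a formal semantics. The sketch below shows only the simpler, uncoloured token game (a transition fires when every input place holds a token, moving tokens to its output places), with hypothetical place names standing in for a message queue between a sender and a receiver:

```python
# Minimal place/transition net: marking maps place name -> token count.
def enabled(marking, transition):
    """A transition is enabled when every input place holds a token."""
    return all(marking.get(p, 0) >= 1 for p in transition["in"])

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce outputs."""
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p in transition["in"]:
        m[p] -= 1
    for p in transition["out"]:
        m[p] = m.get(p, 0) + 1
    return m

# Hypothetical messaging middleware: send puts a message on a queue place,
# receive consumes it.
send = {"in": ["ready"], "out": ["queue"]}
receive = {"in": ["queue"], "out": ["done"]}

m = fire({"ready": 1}, send)   # message now waiting in the queue
m = fire(m, receive)           # message delivered
```

Coloured Petri nets extend this idea by attaching data values ("colours") to tokens, which is what makes them expressive enough to model real middleware interactions.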
Abstract:
Modelling the power system load is a challenge, since the load level and composition vary with time. An accurate load model is important because there is a substantial component of load dynamics in the frequency range relevant to system stability. The composition of loads needs to be characterised because the time constants of composite loads affect the damping contributions of the loads to power system oscillations, and their effects vary with the time of day, depending on the mix of motor loads. This chapter has two main objectives: 1) describe small-signal load modelling using on-line measurements; and 2) present a new approach to developing models that reflect the load response to large disturbances. Small-signal load characterisation based on on-line measurements allows the composition of the load to be predicted with improved accuracy compared with post-mortem or classical load models. Rather than a generic dynamic model for small-signal modelling of the load, an explicit induction motor model is used so that the performance for larger disturbances can be more reliably inferred. The relation between power and frequency/voltage can be explicitly formulated and the contribution of induction motors extracted. One of the main features of this work is that the induction motor component can be associated with nominal powers or equivalent motors.
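The chapter's contribution is extracting an explicit induction motor component; as a much simpler illustration of how load power is commonly tied to voltage and frequency, here is the standard static exponential load model (this is not the chapter's model, and all parameter values below are hypothetical):

```python
def static_load(p0, v, v0, alpha, kpf, df):
    """Static load model: P = P0 * (V/V0)^alpha * (1 + kpf * df).

    p0: power at nominal voltage, alpha: voltage exponent,
    kpf: frequency sensitivity, df: per-unit frequency deviation.
    """
    return p0 * (v / v0) ** alpha * (1 + kpf * df)

# Hypothetical composite load: 100 MW at nominal voltage and frequency.
p_nom = static_load(100.0, 1.00, 1.0, 1.5, 1.0, 0.0)   # nominal conditions
p_sag = static_load(100.0, 0.95, 1.0, 1.5, 1.0, 0.0)   # 5% voltage sag
p_low = static_load(100.0, 1.00, 1.0, 1.5, 1.0, -0.01) # 1% under-frequency
```

The exponent alpha captures the mix of constant-power, constant-current and constant-impedance behaviour; the chapter's point is that a generic fit like this cannot be reliably extrapolated to large disturbances, which is why an explicit motor model is preferred.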
Abstract:
Handling information overload online is, from the user's point of view, a big challenge, especially when the number of websites is growing rapidly due to growth in e-commerce and other related activities. Personalization based on user needs is the key to solving the problem of information overload. Personalization methods help in identifying relevant information that may be liked by a user. User profiles and object profiles are the important elements of a personalization system. When creating user and object profiles, most existing methods adopt two-dimensional similarity methods based on vector or matrix models in order to find inter-user and inter-object similarity. Moreover, for recommending similar objects to users, personalization systems use users-users, items-items and users-items similarity measures. In most cases, similarity measures such as Euclidean, Manhattan, cosine and many others based on vector or matrix methods are used. Web logs are high-dimensional datasets, consisting of multiple users and multiple searches, each with many attributes. Two-dimensional data analysis methods may often overlook latent relationships that exist between users and items. In contrast to other studies, this thesis utilises tensors, which are high-dimensional data models, to build user and object profiles and to find the inter-relationships between users-users and users-items. To create an improved personalized Web system, this thesis proposes to build three types of profiles: individual user, group user and object profiles, utilising decomposition factors of tensor data models. A hybrid recommendation approach utilising group profiles (forming the basis of a collaborative filtering method) and object profiles (forming the basis of a content-based method) in conjunction with individual user profiles (forming the basis of a model-based approach) is proposed for making effective recommendations.
A tensor-based clustering method is proposed that utilises the outcomes of popular tensor decomposition techniques such as PARAFAC, Tucker and HOSVD to group similar instances. An individual user profile, showing the user's highest interest, is represented by the top dimension values, extracted from the component matrix obtained after tensor decomposition. A group profile, showing similar users and their highest interest, is built by clustering similar users based on tensor decomposed values. A group profile is represented by the top association rules (containing various unique object combinations) that are derived from the searches made by the users of the cluster. An object profile is created to represent similar objects clustered on the basis of their similarity of features. Depending on the category of a user (known, anonymous or frequent visitor to the website), any of the profiles or their combinations is used for making personalized recommendations. A ranking algorithm is also proposed that utilizes the personalized information to order and rank the recommendations. The proposed methodology is evaluated on data collected from a real life car website. Empirical analysis confirms the effectiveness of recommendations made by the proposed approach over other collaborative filtering and content-based recommendation approaches based on two-dimensional data analysis methods.
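The decomposition step underlying the profiles above can be sketched with a mode unfolding followed by an SVD, which is how HOSVD obtains its per-mode factor matrices. This is a generic toy illustration (random users x items x features tensor; the thesis's actual data, decomposition choices and clustering are more involved):

```python
import numpy as np

# Toy 3-way tensor: users x items x features (values hypothetical).
rng = np.random.default_rng(0)
T = rng.random((4, 5, 3))

def unfold(tensor, mode):
    """Matricise a 3-way tensor along `mode` (HOSVD-style unfolding)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Mode-0 (user) factor matrix: left singular vectors of the unfolding.
U0, s, _ = np.linalg.svd(unfold(T, 0), full_matrices=False)

# Users ranked by weight on the leading latent factor; in the thesis,
# top component-matrix values like these represent a user's highest interest.
top_user_dims = np.argsort(-np.abs(U0[:, 0]))
```

Repeating the same unfold-and-SVD step for the item and feature modes yields the remaining factor matrices, whose rows are then clustered to form group and object profiles.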
Abstract:
Mixture models are a flexible tool for unsupervised clustering that have found popularity in a vast array of research areas. In studies of medicine, the use of mixtures holds the potential to greatly enhance our understanding of patient responses through the identification of clinically meaningful clusters that, given the complexity of many data sources, may otherwise be intangible. Furthermore, when developed in the Bayesian framework, mixture models provide a natural means for capturing and propagating uncertainty in different aspects of a clustering solution, arguably resulting in richer analyses of the population under study. This thesis aims to investigate the use of Bayesian mixture models in analysing varied and detailed sources of patient information collected in the study of complex disease. The first aim of this thesis is to showcase the flexibility of mixture models in modelling markedly different types of data. In particular, we examine three common variants of the mixture model, namely finite mixtures, Dirichlet process mixtures and hidden Markov models. Beyond the development and application of these models to different sources of data, this thesis also focuses on modelling different aspects of uncertainty in clustering. Examples considered include uncertainty in a patient's true cluster membership and uncertainty in the true number of clusters present. Finally, this thesis aims to address and propose solutions to the task of comparing clustering solutions, whether this be comparing patients or observations assigned to different subgroups or comparing clustering solutions over multiple datasets. To address these aims, we consider a case study in Parkinson's disease (PD), a complex and commonly diagnosed neurodegenerative disorder. In particular, two commonly collected sources of patient information are considered.
The first source of data is on symptoms associated with PD, recorded using the Unified Parkinson's Disease Rating Scale (UPDRS), and constitutes the first half of this thesis. The second half of this thesis is dedicated to the analysis of microelectrode recordings collected during Deep Brain Stimulation (DBS), a popular palliative treatment for advanced PD. Analysis of this second source of data centres on the problems of unsupervised detection and sorting of action potentials, or "spikes", in recordings of multiple cell activity, providing valuable information on real-time neural activity in the brain.
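The thesis works with Bayesian mixtures; purely to illustrate the basic mixture-clustering idea it builds on, here is a maximum-likelihood EM fit of a two-component one-dimensional Gaussian mixture (not the thesis's Bayesian machinery, and the data are synthetic):

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread-out initial means
    sigma = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        # (the 1/sqrt(2*pi) constant cancels in the normalisation).
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means and standard deviations.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return w, mu, sigma

# Synthetic data: two well-separated clusters of "patients".
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 0.5, 200), rng.normal(3, 0.5, 200)])
w, mu, sigma = em_gmm_1d(x)
```

A Bayesian treatment replaces these point estimates with posterior distributions, which is what lets the thesis quantify uncertainty in cluster membership and in the number of clusters.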
Abstract:
The natural convection thermal boundary layer adjacent to an inclined flat plate and to the inclined walls of an attic space subject to instantaneous and ramp heating and cooling is investigated. A scaling analysis has been performed to describe the flow behaviour and heat transfer. Major scales quantifying the flow velocity, flow development time, heat transfer, and the thermal and viscous boundary layer thicknesses at different stages of the flow development are established. Scaling relations for the heating-up and cooling-down times and the heat transfer rates have also been reported for the case of the attic space. The scaling relations have been verified by numerical simulations over a wide range of parameters. A periodic temperature boundary condition is also considered to show the flow features in the attic space over diurnal cycles.
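The paper's own stage-dependent scales differ, but as a generic illustration of this kind of scaling argument, the sketch below computes a Rayleigh number and the classic steady natural-convection thermal boundary-layer thickness scale, delta ~ h * Ra^(-1/4) (the fluid property values are hypothetical, roughly representative of air):

```python
def rayleigh(beta, dT, h, nu, kappa, g=9.81):
    """Rayleigh number: Ra = g * beta * dT * h^3 / (nu * kappa)."""
    return g * beta * dT * h ** 3 / (nu * kappa)

def thermal_bl_thickness(h, Ra):
    """Classic scaling: steady thermal boundary-layer thickness ~ h * Ra^(-1/4)."""
    return h * Ra ** -0.25

# Hypothetical air-like properties, 10 K temperature difference, 0.5 m wall.
Ra = rayleigh(beta=3.4e-3, dT=10.0, h=0.5, nu=1.5e-5, kappa=2.1e-5)
delta = thermal_bl_thickness(0.5, Ra)  # a few millimetres at this Ra
```

The ramp-heating and inclined-wall cases analysed in the paper modify these scales with additional parameters (ramp time, inclination), but the dimensional reasoning is of the same form.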
Abstract:
Magnetohydrodynamic (MHD) natural convection laminar flow from an isothermal horizontal circular cylinder immersed in a fluid whose viscosity is a linear function of temperature is discussed with numerical simulations. The governing boundary layer equations are transformed into non-dimensional form, and the resulting nonlinear system of partial differential equations is reduced to a convenient form that is solved numerically by two very efficient methods: (i) an implicit finite difference method with the Keller box scheme, and (ii) a direct numerical scheme. Numerical results are presented for the velocity and temperature distributions of the fluid as well as the heat transfer characteristics, namely the shearing stress and the local heat transfer rate in terms of the local skin-friction coefficient and the local Nusselt number, for a wide range of the magnetohydrodynamic parameter, viscosity-variation parameter and viscous dissipation parameter. MHD flow in this geometry with temperature-dependent viscosity is absent from the literature. The results obtained from the numerical simulations have been verified by the two methodologies.
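The Keller box scheme itself is a specialised implicit method for boundary-layer equations; purely to illustrate the implicit finite-difference idea it belongs to, here is a single backward-Euler step for the 1-D heat equation with fixed-temperature ends (a toy problem, not the paper's coupled MHD system):

```python
import numpy as np

def implicit_heat_step(u, dt, dx, alpha=1.0):
    """One backward-Euler step of u_t = alpha * u_xx, Dirichlet ends held fixed."""
    n = len(u)
    r = alpha * dt / dx ** 2
    A = np.eye(n)                       # boundary rows stay as identity
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
    return np.linalg.solve(A, u)        # tridiagonal solve (dense here for brevity)

x = np.linspace(0.0, 1.0, 21)
u = np.sin(np.pi * x)                   # initial temperature profile
u1 = implicit_heat_step(u, dt=1e-3, dx=x[1] - x[0])
```

Implicit schemes of this family are unconditionally stable, which is why they suit stiff boundary-layer problems; the Keller box method achieves second-order accuracy by working with first-order systems on box-centred differences.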
Abstract:
This paper argues for a renewed focus on statistical reasoning in the beginning school years, with opportunities for children to engage in data modelling. Results are reported from the first year of a 3-year longitudinal study in which three classes of first-grade children (6-year-olds) and their teachers engaged in data modelling activities. The theme of Looking after our Environment, part of the children's science curriculum, provided the task context. The goals for the two activities addressed here included engaging children in core components of data modelling, namely, selecting attributes, structuring and representing data, identifying variation in data, and making predictions from given data. Results include the various ways in which children represented and re-represented collected data, including attribute selection, and the metarepresentational competence they displayed in doing so. The "data lenses" through which the children dealt with informal inference (variation and prediction) are also reported.
Abstract:
Modern technology now has the ability to generate large datasets over space and time. Such data typically exhibit high autocorrelations over all dimensions. The field trial data motivating the methods of this paper were collected to examine the behaviour of traditional cropping and to determine a cropping system which could maximise water use for grain production while minimising leakage below the crop root zone. They consist of moisture measurements made at 15 depths across 3 rows and 18 columns, in the lattice framework of an agricultural field. Bayesian conditional autoregressive (CAR) models are used to account for local site correlations. Conditional autoregressive models have not been widely used in analyses of agricultural data; this paper serves to illustrate their usefulness in this field, along with the ease of implementation in WinBUGS, a freely available software package. The innovation is the fitting of separate conditional autoregressive models for each depth layer, the 'layered CAR model', while simultaneously estimating depth profile functions for each site treatment. Modelling interest also lay in how best to model the treatment-effect depth profiles, and in the choice of neighbourhood structure for the spatial autocorrelation model. The favoured model fitted the treatment effects as splines over depth, treated depth, the basis for the regression model, as measured with error, and fitted CAR neighbourhood models by depth layer. It is hierarchical, with separate conditional autoregressive spatial variance components at each depth, and its fixed terms involve an errors-in-measurement model that treats depth errors as interval-censored measurement error. The Bayesian framework permits transparent specification and easy comparison of the various complex models considered.
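The neighbourhood structure a CAR model encodes can be made concrete through its precision matrix. The sketch below builds the proper-CAR precision Q = tau * (D - rho * W) for four hypothetical sites in a line with a first-order neighbourhood (a generic illustration, not the paper's layered model or its WinBUGS specification):

```python
import numpy as np

def car_precision(W, rho, tau=1.0):
    """Proper CAR precision matrix: Q = tau * (D - rho * W),
    where W is a symmetric 0/1 neighbourhood matrix and
    D = diag(row sums of W) holds each site's neighbour count."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Four sites along a transect, each adjacent to its immediate neighbours.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

Q = car_precision(W, rho=0.9)   # rho < 1 keeps Q positive definite
```

In the layered model of the paper, a separate matrix of this kind (with its own spatial variance component) applies within each depth layer, while the depth profiles tie the layers together.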
Abstract:
Three-dimensional wagon train models have been developed for crashworthiness analysis using a multi-body dynamics approach. The contribution of train size (number of wagons) to the frontal crash forces can be identified through the simulations. The effects of crash energy management (CEM) design and crash speed on train crashworthiness are examined. The CEM design can significantly improve train crashworthiness and the consequent vehicle stability performance, reducing derailment risks.
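The core mechanism behind a CEM design can be illustrated with the simplest possible multi-body element: a linear crush spring between two colliding masses, for which the peak interface force is F = v_rel * sqrt(k * mu), with mu the reduced mass. All values below are hypothetical and this is a one-element sketch, not the paper's wagon train model:

```python
import math

def peak_contact_force(m1, m2, v_rel, k):
    """Peak force in an elastic two-body impact through a linear spring:
    energy balance 0.5*mu*v^2 = 0.5*k*d^2 with F = k*d gives F = v*sqrt(k*mu)."""
    mu = m1 * m2 / (m1 + m2)   # reduced mass
    return v_rel * math.sqrt(k * mu)

# Hypothetical 40 t wagons closing at 10 m/s: a softer crush element
# (lower k) caps the peak interface force between the wagons.
stiff = peak_contact_force(4e4, 4e4, 10.0, 5e7)
soft = peak_contact_force(4e4, 4e4, 10.0, 5e6)
```

The price of the lower peak force is a longer crush stroke, which is why CEM zones are designed as dedicated, sacrificial deformation elements.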
Abstract:
The growth of solid tumours beyond a critical size is dependent upon angiogenesis, the formation of new blood vessels from an existing vasculature. Tumours may remain dormant at microscopic sizes for some years before switching to a mode in which growth of a supportive vasculature is initiated. The new blood vessels supply nutrients, oxygen, and access to routes by which tumour cells may travel to other sites within the host (metastasize). In recent decades an abundance of biological research has focused on tumour-induced angiogenesis in the hope that treatments targeted at the vasculature may result in a stabilisation or regression of the disease: a tantalizing prospect. The complex and fascinating process of angiogenesis has also attracted the interest of researchers in the field of mathematical biology, a discipline that is, for mathematics, relatively new. The challenge in mathematical biology is to produce a model that captures the essential elements and critical dependencies of a biological system. Such a model may ultimately be used as a predictive tool. In this thesis we examine a number of aspects of tumour-induced angiogenesis, focusing on growth of the neovasculature external to the tumour. Firstly we present a one-dimensional continuum model of tumour-induced angiogenesis in which elements of the immune system or other tumour-cytotoxins are delivered via the newly formed vessels. This model, based on observations from experiments by Judah Folkman et al., is able to show regression of the tumour for some parameter regimes. The modelling highlights a number of interesting aspects of the process that may be characterised further in the laboratory. The next model we present examines the initiation positions of blood vessel sprouts on an existing vessel, in a two-dimensional domain. 
This model hypothesises that a simple feedback inhibition mechanism may be used to describe the spacing of these sprouts, with the inhibitor being produced by breakdown of the existing vessel's basement membrane. Finally, we have developed a stochastic model of blood vessel growth and anastomosis in three dimensions. The model has been implemented in C++, includes an OpenGL interface, and uses a novel algorithm for calculating the proximity of the line segments representing a growing vessel. This choice of programming language and graphics interface allows for near-simultaneous calculation and visualisation of blood vessel networks using a contemporary personal computer. In addition, the visualised results may be transformed interactively, and drop-down menus facilitate changes in the parameter values. Visualisation of results is of vital importance in the communication of mathematical information to a wide audience, and we aim to incorporate this philosophy in the thesis. As biological research further uncovers the intriguing processes involved in tumour-induced angiogenesis, we conclude with a comment from mathematical biologist Jim Murray: "Mathematical biology is … the most exciting modern application of mathematics."
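The thesis's proximity algorithm for vessel segments is its own (and implemented in C++); as a generic reference point, the standard clamped closest-point computation for two 3-D line segments looks like this (hypothetical segment coordinates, not the thesis's algorithm or data):

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between 3-D line segments [p1,q1] and [p2,q2]."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e = d1 @ d1, d2 @ d2
    b, c, f = d1 @ d2, d1 @ r, d2 @ r
    denom = a * e - b * b
    # Closest-point parameters s (segment 1) and t (segment 2), clamped to [0, 1];
    # degenerate/parallel cases fall back to endpoint projections.
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e if e > 1e-12 else 0.0
    if t < 0.0:
        t = 0.0
        s = np.clip(-c / a, 0.0, 1.0) if a > 1e-12 else 0.0
    elif t > 1.0:
        t = 1.0
        s = np.clip((b - c) / a, 0.0, 1.0) if a > 1e-12 else 0.0
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))

# Two skew segments crossing 1 unit apart (hypothetical vessel segments).
dist_cross = segment_distance(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                              np.array([0.5, -0.5, 1.0]),
                              np.array([0.5, 0.5, 1.0]))
```

In a growth-and-anastomosis simulation, a check of this kind decides when a growing sprout has come close enough to an existing vessel to fuse with it.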
Abstract:
LiteSteel beam (LSB) is a new cold-formed steel hollow flange channel beam produced using a patented manufacturing process involving simultaneous cold-forming and dual electric resistance welding. It has the beneficial characteristics of torsionally rigid closed rectangular flanges combined with economical fabrication from a single strip of high strength steel. Although LSB sections are commonly used as flexural members, no research had been undertaken on the shear behaviour of LSBs. Therefore, experimental and numerical studies were undertaken to investigate the shear behaviour and strength of LSBs. In this research, finite element models of LSBs were developed to investigate their nonlinear shear behaviour, including their buckling characteristics and ultimate shear strength. They were validated by comparing their results with available experimental results. The models provided full details of the shear buckling and strength characteristics of LSBs, and showed considerable improvements to web shear buckling in LSBs and associated post-buckling strength. This paper presents the details of the finite element models of LSBs and the results. Both the finite element analyses and the experiments showed that the current design rules in cold-formed steel codes are very conservative for the shear design of LSBs. The ultimate shear capacities from the finite element analyses confirmed the accuracy of shear strength equations proposed for LSBs based on the North American specification and DSM design equations. The developed finite element models were also used to investigate the reduction in shear capacity of LSBs when full-height web side plates were not used or when only one web side plate was used, and these results are also presented in this paper.
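The web shear buckling behaviour the paper investigates is conventionally anchored to the classical elastic plate buckling stress, tau_cr = kv * pi^2 * E / (12 * (1 - nu^2)) * (t/d)^2. The sketch below evaluates it for hypothetical LSB-like web dimensions with the simply-supported long-plate coefficient kv = 5.34 (the paper's point is that the rigid hollow flanges raise the effective kv above such code assumptions):

```python
import math

def shear_buckling_stress(E, nu, kv, t, d):
    """Classical elastic shear buckling stress of a plate:
    tau_cr = kv * pi^2 * E / (12 * (1 - nu^2)) * (t/d)^2."""
    return kv * math.pi ** 2 * E / (12 * (1 - nu ** 2)) * (t / d) ** 2

# Hypothetical web: E = 200,000 MPa, nu = 0.3, t = 2 mm, clear depth d = 160 mm,
# kv = 5.34 for a long simply-supported plate in shear.
tau_cr = shear_buckling_stress(200e3, 0.3, 5.34, 2.0, 160.0)  # in MPa
```

Design shear capacities in cold-formed steel codes compare a stress of this form against the shear yield stress; greater flange restraint (higher kv) and post-buckling reserve are the two effects the finite element models quantified.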