975 results for product modelling
Abstract:
Nitrous oxide is a major greenhouse gas. The aim of this research was to develop and apply statistical models to characterize the complex spatial and temporal variation in nitrous oxide emissions from soils under different land use conditions. This is critical when developing site-specific management plans to reduce nitrous oxide emissions. These studies can improve predictions and increase our understanding of the environmental factors that influence nitrous oxide emissions. They also help to identify areas for future research, which can further improve the prediction of nitrous oxide emissions in practice.
Abstract:
Learning is most effective when it is intrinsically motivated through personal interest and situated in a supportive socio-cultural context. This paper reports on findings from a study that explored implications for the design of interactive learning environments through 18 months of ethnographic observations of people's interactions at "Hack The Evening" (HTE). HTE is a meetup group initiated at the State Library of Queensland in Brisbane, Australia, dedicated to providing visitors with opportunities for connected learning in relation to hacking, making and do-it-yourself technology. The results provide insights into factors that contributed to HTE as a social, interactive and participatory environment for learning – knowledge is created and co-created through uncoordinated interactions among participants who come from a diversity of backgrounds, skills and areas of expertise. The insights also reveal challenges and barriers that the HTE group faced with regard to connected learning. Four dimensions of design opportunities are presented to overcome those challenges and barriers towards improving connected learning in library buildings and other free-choice learning environments that seek to embody a more interactive and participatory culture among their users. The insights are relevant for librarians as well as designers, managers and decision makers of other interactive and free-choice learning environments.
Abstract:
The export market for Australian wine continues to grow at a rapid rate, with imported wines also gaining market share in Australia. It is estimated that over 60 per cent of all Australian wine is exported, while 12 per cent of wine consumed in Australia has overseas origins. In addition to understanding the size and direction (import or export) of wine flows, the foreign locales also play an important role in any tax considerations. While the export market for Australian produced alcohol continues to grow, it is into the Asian market that the most significant inroads are occurring. Sales into China of bottled wine over $7.50 per litre recently overtook the volume sold to our traditional partners, the United States and Canada. It is becoming easier for even small to medium sized businesses to export their services or products overseas. However, it is vital for those businesses to understand the tax rules applying to any international transactions. Specifically, one of the first tax regimes that importers and exporters need to understand once they decide to establish a presence overseas is transfer pricing. These are the rules that govern the cross-border prices of goods, services and other transactions entered into between related parties. This paper is Part 2 of the seminar presented on transfer pricing and international tax issues which are particularly relevant to the wine industry. The predominant focus of Part 2 is to discuss four key areas likely to affect international expansion. First, the use of the available transfer pricing methodologies for international related party transactions is discussed. Second, the effects that double tax agreements will have on taking a business offshore are considered. Third, the risks associated with aggressive tax planning through tax information exchange agreements are reviewed.
Finally, the paper predicts future ‘trip-wires’ and areas to ‘watch out for’ for practitioners dealing with clients operating in the international arena.
Abstract:
While there are many similarities between the languages of the various workflow management systems, there are also significant differences. One particular area of difference arises from the fact that different systems impose different syntactic restrictions. In such cases, business analysts have to choose between either conforming to the language in their specifications or transforming these specifications afterwards. The latter option is preferable as it allows for a separation of concerns. In this paper we investigate to what extent such transformations are possible in the context of various syntactic restrictions (the most restrictive of which will be referred to as structured workflows). We also provide a deep insight into the consequences, particularly in terms of expressive power, of imposing such restrictions.
Abstract:
This paper deals with the failure of high-adhesive, low-compressive-strength, thin-layered polymer mortar joints in masonry through contact modelling in a finite element framework. Failure due to combined shear, tensile and compressive stresses is considered through a constitutive damaging contact model that incorporates traction–separation as a function of displacement discontinuity. The modelling method is verified using single and multiple contact analyses of thin mortar layered masonry specimens under shear, tensile and compressive stresses and their combinations. Using this verified method, the failure of thin mortar layered masonry under a range of shear to tension ratios and shear to compression ratios has been examined. Finally, this model is applied to thin bed masonry wallettes to examine their behaviour under biaxial tension–tension and compression–tension loadings perpendicular and parallel to the bed joints.
Abstract:
The article focuses on how the information seeker makes decisions about relevance. It employs a novel decision theory based on quantum probabilities. This direction derives from mounting research within the field of cognitive science showing that decision theory based on quantum probabilities is superior to standard probability models in modelling human judgements [2, 1]. By quantum probabilities, we mean that the decision event space is modelled as a vector space rather than the usual Boolean algebra of sets. In this way, incompatible perspectives around a decision can be modelled, leading to an interference term which modifies the law of total probability. The interference term is crucial in modifying the probability judgements made by current probabilistic systems so that they align better with human judgement. The goal of this article is thus to model the information seeker as a decision maker. For this purpose, signal detection models are sketched which are in principle applicable in a wide variety of information seeking scenarios.
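The modified law of total probability mentioned in this abstract has a standard form in the quantum cognition literature; the sketch below uses generic symbols (a decision event A conditioned on an incompatible perspective B) rather than the article's own notation.

```latex
% Classical law of total probability:
P(A) = P(A \mid B)\,P(B) + P(A \mid \bar{B})\,P(\bar{B})

% Quantum-like model: probabilities arise from squared amplitudes,
% which adds an interference term governed by a phase angle \theta:
P(A) = P(A \mid B)\,P(B) + P(A \mid \bar{B})\,P(\bar{B})
       + 2\sqrt{P(A \mid B)\,P(B)\,P(A \mid \bar{B})\,P(\bar{B})}\,\cos\theta
```

When the perspectives are compatible, \(\cos\theta = 0\) and the classical law is recovered; a non-zero phase shifts the total probability above or below the classical value, which is how such models accommodate human judgements that violate the standard law.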
Abstract:
Modelling business processes for analysis or redesign usually requires the collaboration of many stakeholders. These stakeholders may be spread across locations or even companies, making co-located collaboration costly and difficult to organize. Modern process modelling technologies support remote collaboration but lack support for visual cues used in co-located collaboration. Previously we presented a prototype 3D virtual world process modelling tool that supports a number of visual cues to facilitate remote collaborative process model creation and validation. However, the added complexity of having to navigate a virtual environment and using an avatar for communication made the tool difficult to use for novice users. We now present an evolved version of the technology that addresses these issues by providing natural user interfaces for non-verbal communication, navigation and model manipulation.
Abstract:
Finite Element modelling of bone fracture fixation systems allows computational investigation of the deformation response of the bone to load. Once validated, these models can be easily adapted to explore changes in design or configuration of a fixator. The deformation of the tissue within the fracture gap determines its healing and is often summarised as the stiffness of the construct. FE models capable of reproducing this behaviour would provide valuable insight into the healing potential of different fixation systems. Current model validation techniques lack depth in 6D load and deformation measurements. Other aspects of FE model creation, such as the definition of interfaces between components, have also not been explored. This project investigated the mechanical testing and FE modelling of a bone–plate construct for the determination of stiffness. In-depth 6D measurement and analysis of the generated forces, moments and movements showed large out-of-plane behaviours which had not previously been characterised. Stiffness calculated from the interfragmentary movement was found to be an unsuitable summary parameter as the error propagation is too large. Current FE modelling techniques were applied in compression and torsion mimicking the experimental setup. Compressive stiffness was well replicated, though torsional stiffness was not. The out-of-plane behaviours prevalent in the experimental work were not replicated in the model. The interfaces between the components were investigated experimentally and through modification to the FE model. Incorporation of the interface modelling techniques into the full construct models had no effect in compression but did act to reduce torsional stiffness, bringing it closer to that of the experiment. The interface definitions had no effect on out-of-plane behaviours, which were still not replicated.
Neither current nor novel FE modelling techniques were able to replicate the out-of-plane behaviours evident in the experimental work. New techniques for modelling loads and boundary conditions need to be developed to mimic the effects of the entire experimental system.
Abstract:
Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model [1], the Poisson model [1], the double logarithmic model [2] and the compound model [3] - for detection of insects over a broad range of insect densities. Although the double log and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed the best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage.
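The general idea behind probability-of-detection models can be illustrated with the standard zero-count formulas for two of the distributions named above. This is a generic sketch, not the paper's parameterisation: `mean_density` and `k` are illustrative symbols, and the compound and double logarithmic models are not reproduced here.

```python
import math

def p_detect_poisson(mean_density, n_samples):
    # Counts per sample ~ Poisson(mean_density); detection means >= 1 insect
    # found across n_samples independent samples.
    return 1.0 - math.exp(-n_samples * mean_density)

def p_detect_negbin(mean_density, k, n_samples):
    # Negative binomial with mean `mean_density` and clumping parameter k
    # (small k = highly clumped); P(zero in one sample) = (1 + mean/k)**-k.
    p_zero = (1.0 + mean_density / k) ** (-k)
    return 1.0 - p_zero ** n_samples

# At the same mean density, a clumped (negative binomial) population is
# harder to detect than a randomly dispersed (Poisson) one.
print(round(p_detect_poisson(0.1, 10), 3))   # 0.632
print(round(p_detect_negbin(0.1, 0.5, 10), 3))  # 0.598
```

This is why the choice of underlying distribution matters when sizing a sampling programme: assuming Poisson counts for a clumped species overstates the detection probability for a given number of samples.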
Abstract:
A demo video showing the BPMVM prototype using several natural user interfaces, such as multi-touch input, full-body tracking and virtual reality.
Abstract:
This paper describes a risk model for estimating the likelihood of collisions at low-exposure railway level crossings, demonstrating the effect that differences in safety integrity can have on the likelihood of a collision. The model facilitates the comparison of safety benefits between level crossings with passive controls (stop or give-way signs) and level crossings that have been hypothetically upgraded with conventional or low-cost warning devices. The scenario presented illustrates how treatment of a cross-section of level crossings with low cost devices can provide a greater safety benefit compared to treatment with conventional warning devices for the same budget.
Abstract:
Keeping exotic plant pests out of our country relies on good border control or quarantine. However, with increasing globalization and mobilization some things slip through. Then the back-up systems become important. These can include an expensive form of surveillance that purposively targets particular pests. A much wider net is provided by general surveillance, which is assimilated into everyday activities, like farmers checking the health of their crops. In fact, farmers and even home gardeners have provided a front-line warning system for some pests (e.g. the European wasp) that could otherwise have wreaked havoc. Mathematics is used to model how surveillance works in various situations. Within this virtual world we can play with various surveillance and management strategies to "see" how they would work, or how to make them work better. One of our greatest challenges is estimating some of the input parameters: because the pest hasn't been here before, it's hard to predict how it might behave in establishing, spreading, and what types of symptoms it might express. So we rely on experts to help us with this. This talk will look at the mathematical, psychological and logical challenges of helping experts to quantify what they think. We show how the subjective Bayesian approach is useful for capturing expert uncertainty, ultimately providing a more complete picture of what they think... and what they don't!
Abstract:
This book provides a general framework for specifying, estimating, and testing time series econometric models. Special emphasis is given to estimation by maximum likelihood, but other methods are also discussed, including quasi-maximum likelihood estimation, generalized method of moments estimation, nonparametric estimation, and estimation by simulation. An important advantage of adopting the principle of maximum likelihood as the unifying framework for the book is that many of the estimators and test statistics proposed in econometrics can be derived within a likelihood framework, thereby providing a coherent vehicle for understanding their properties and interrelationships. In contrast to many existing econometric textbooks, which deal mainly with the theoretical properties of estimators and test statistics through a theorem-proof presentation, this book squarely addresses implementation to provide direct conduits between the theory and applied work.
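The maximum likelihood principle that unifies the book can be illustrated with the simplest possible case, an i.i.d. normal sample, where the ML estimators have closed forms. This toy example is not drawn from the book; the data values are invented for illustration.

```python
import math

def normal_loglik(data, mu, sigma2):
    # Log-likelihood of an i.i.d. sample under N(mu, sigma2)
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - sum((x - mu) ** 2 for x in data) / (2 * sigma2))

data = [1.2, 0.8, 1.5, 1.1, 0.9]

# Closed-form ML estimators: sample mean and (biased) sample variance
mu_hat = sum(data) / len(data)
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / len(data)

# The MLE maximises the log-likelihood: perturbing mu lowers it
assert normal_loglik(data, mu_hat, sigma2_hat) > \
       normal_loglik(data, mu_hat + 0.1, sigma2_hat)
print(round(mu_hat, 2), round(sigma2_hat, 3))  # 1.1 0.06
```

For models without closed-form solutions (the usual case in time series econometrics), the same log-likelihood is maximised numerically, and test statistics such as likelihood ratio, Wald and LM tests are built from the same function.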
Abstract:
Currently, recommender systems (RS) are widely applied in many commercial e-commerce sites to help users deal with the information overload problem. Recommender systems provide personalized recommendations to users and thus help them make good decisions about which product to buy from the vast range of product choices. Many current recommender systems are developed for simple and frequently purchased products like books and videos, using collaborative-filtering and content-based approaches. These approaches are not directly applicable to recommending infrequently purchased products such as cars and houses, as it is difficult to collect a large number of ratings from users for such products. Many of the e-commerce sites for infrequently purchased products still use basic search-based techniques whereby the products that match the attributes given in the target user's query are retrieved and recommended. However, search-based recommenders cannot provide personalized recommendations: for different users, the recommendations will be the same if they provide the same query, regardless of any difference in their interests. In this article, a simple user profiling approach is proposed to generate users' preferences for product attributes (i.e., user profiles) based on user product click-stream data. The user profiles can be used to find similar-minded users (i.e., neighbours) accurately. Two recommendation approaches are proposed, namely the Round-Robin fusion algorithm (CFRRobin) and the Collaborative Filtering-based Aggregated Query algorithm (CFAgQuery), to generate personalized recommendations based on the user profiles.
Instead of using the target user's query to search for products as normal search-based systems do, the CFRRobin technique uses the attributes of the products in which the target user's neighbours have shown interest as queries to retrieve relevant products, and then recommends to the target user a list of products created by merging and ranking the returned products using the Round Robin method. The CFAgQuery technique uses the attributes of the products that the user's neighbours have shown interest in to derive an aggregated query, which is then used to retrieve products to recommend to the target user. Experiments conducted on a real e-commerce dataset show that both the proposed techniques, CFRRobin and CFAgQuery, perform better than the standard Collaborative Filtering and Basic Search approaches, which are widely applied in current e-commerce applications.
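The round-robin merging step described above can be sketched generically: take one item from each neighbour's result list in turn, skipping duplicates. This is a minimal illustration of round-robin fusion, not the CFRRobin algorithm itself; the function name and the sample product lists are invented for the example.

```python
def round_robin_fuse(ranked_lists, top_n=10):
    """Merge several ranked result lists by taking one item from each
    list in turn, skipping items already seen (generic round-robin
    fusion; CFRRobin's full ranking details live in the original work)."""
    fused, seen = [], set()
    for rank in range(max(len(lst) for lst in ranked_lists)):
        for lst in ranked_lists:
            if rank < len(lst) and lst[rank] not in seen:
                seen.add(lst[rank])
                fused.append(lst[rank])
                if len(fused) == top_n:
                    return fused
    return fused

# Result lists retrieved using two neighbours' preferred attributes as queries
neighbour_a = ["car1", "car2", "car3"]
neighbour_b = ["car2", "car4", "car5"]
print(round_robin_fuse([neighbour_a, neighbour_b]))
# ['car1', 'car2', 'car4', 'car3', 'car5']
```

Note how items near the top of any neighbour's list surface early in the fused list, which is the property that makes round-robin fusion a simple but effective merging strategy.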
Abstract:
Travel time prediction has long been a topic of transportation research, but most prediction models in the literature are limited to motorways. Travel time prediction on arterial networks is challenging owing to traffic signals and the significant variability of individual vehicle travel times. The limited availability of traffic data from arterial networks makes travel time prediction even more challenging. Recently, there has been significant interest in exploiting Bluetooth data for travel time estimation. This research analysed real travel time data collected by the Brisbane City Council using Bluetooth technology on arterials. Databases including experienced average daily travel times are created and classified for approximately 8 months. Thereafter, based on the data characteristics, Seasonal Auto Regressive Integrated Moving Average (SARIMA) modelling is applied to the database for short-term travel time prediction. The SARIMA model not only takes the previous continuous lags into account, but also uses the values from the same time of day on previous days for travel time prediction. This is carried out by defining a seasonality coefficient which improves the accuracy of travel time prediction in linear models. The accuracy, robustness and transferability of the model are evaluated by comparing the real and predicted values at three sites within the Brisbane network. The results contain detailed validation for different prediction horizons (5 to 90 minutes). The model performance is evaluated mainly on congested periods and compared to the naive technique of considering the historical average.
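The intuition behind the seasonal term, using the value from the same time on the previous day alongside recent lags, can be sketched in a few lines. This is a deliberately simplified blend, not the fitted SARIMA model; the weight `alpha`, the function name and the sample series are all invented for illustration.

```python
def seasonal_blend_forecast(series, season, alpha=0.5):
    """One-step travel time forecast blending the most recent observation
    with the value at the same time on the previous day.
    `season` = observations per day; `alpha` is an illustrative weight,
    not a coefficient estimated from the Brisbane data."""
    recent = series[-1]                # last continuous lag
    same_time_yesterday = series[-season]  # seasonal lag
    return alpha * recent + (1 - alpha) * same_time_yesterday

# 5-minute travel time samples: 288 per day
season = 288
history = [30.0] * season + [45.0]  # yesterday flat at 30 s, a spike just now
print(seasonal_blend_forecast(history, season))  # 37.5
```

Because the forecast is pulled back towards yesterday's value at the same time of day, a transient spike does not dominate the prediction, which is the benefit the abstract attributes to the seasonality coefficient over purely recent-lag models.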