932 results for Imputation model approach
Abstract:
Extensible Markup Language (XML) has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML is increasing, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of Adaptive Genetic Algorithms and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on feedback from the users, the system automatically adapts to the user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task, with respect to both accuracy and efficiency.
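As a rough illustration of the classifier component of such a user model (not the authors' system), the sketch below trains a multi-class SVM on TF-IDF features of flattened XML text with scikit-learn; the documents, interest labels and incoming item are hypothetical, and the adaptive genetic algorithm used in the paper for feature weighting and relevance feedback is omitted.

```python
# Hedged sketch: a multi-class SVM over TF-IDF features of flattened XML text
# stands in for the classifier part of the user model. Documents, interest
# labels and the incoming item are illustrative; the paper's adaptive GA
# feature weighting and relevance-feedback loop are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

docs = [
    "<article><topic>sports</topic><body>match score league</body></article>",
    "<article><topic>finance</topic><body>stock market earnings</body></article>",
    "<article><topic>tech</topic><body>chip design compiler</body></article>",
    "<article><topic>sports</topic><body>tournament player goal</body></article>",
    "<article><topic>finance</topic><body>bond yield inflation</body></article>",
    "<article><topic>tech</topic><body>kernel scheduler patch</body></article>",
]
labels = ["sports", "finance", "tech", "sports", "finance", "tech"]

# Linear multi-class SVM (one-vs-one internally) on TF-IDF features.
user_model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
user_model.fit(docs, labels)

# An incoming document from the stream is routed to users whose learned
# interest category matches the prediction.
incoming = "<article><body>quarterly earnings beat market forecast</body></article>"
print("route to users interested in:", user_model.predict([incoming])[0])
```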
Abstract:
Research on business growth has been criticized for methodological weaknesses. We present a mediated moderation growth model as a new methodological approach. We hypothesized that small business managers' age negatively affects business growth through focus on opportunities. We sampled 201 small business managers and obtained firm performance data over 5 years, resulting in 836 observations. Growth modeling showed systematic differences in firm performance trajectories. These differences could be explained by modeling focus on opportunities as a mediator of the relationship between small business managers' age and business growth. The study illustrates how mediation models can be tested using growth modeling.
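Purely as a sketch of the mediation logic invoked here, and not the paper's mediated moderation growth model over repeated yearly observations, the snippet below estimates a simple indirect effect a*b with OLS on synthetic data; all variable names and coefficients are illustrative assumptions.

```python
# Minimal mediation sketch (indirect effect = a*b) on synthetic data; the
# paper's full growth-modeling approach with firm performance trajectories
# is not reproduced here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 201
age = rng.normal(45, 10, n)                     # manager age (synthetic)
focus = 5 - 0.04 * age + rng.normal(0, 1, n)    # focus on opportunities (mediator)
growth = 0.5 * focus + rng.normal(0, 1, n)      # business growth (outcome)

# Path a: mediator regressed on the predictor.
a_fit = sm.OLS(focus, sm.add_constant(age)).fit()
# Path b and direct effect c': outcome regressed on predictor and mediator.
b_fit = sm.OLS(growth, sm.add_constant(np.column_stack([age, focus]))).fit()

a, b = a_fit.params[1], b_fit.params[2]
print("indirect effect a*b =", round(a * b, 4))
print("direct effect c'    =", round(b_fit.params[1], 4))
```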
Abstract:
Bottlenecking operations in complex coal rail systems cost mining companies millions of dollars. To address this issue, this paper investigates a real-world coal rail system and aims to optimise coal railing operations under constraints of limited resources (e.g., a limited number of locomotives and wagons). In the literature, most studies considered the train scheduling problem on a single-track railway network to be strongly NP-hard and thus developed metaheuristics as the main solution methods. In this paper, a new mathematical programming model is formulated and implemented in an optimization programming language based on a constraint programming (CP) approach. A new depth-first-search technique is developed and embedded inside the CP model to obtain the optimised coal railing timetable efficiently. Computational experiments demonstrate that high-quality solutions are obtainable in industry-scale applications. To support decision making, sensitivity analysis is conducted over different scenarios and specific criteria.
Keywords: Train scheduling · Rail transportation · Coal mining · Constraint programming
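The paper's CP model is written in an optimization programming language with a purpose-built depth-first search; purely to illustrate the constraint programming idea, the sketch below schedules a few hypothetical trains over one shared single-track section using Google OR-Tools CP-SAT with a no-overlap constraint. Train names, durations and release times are illustrative.

```python
# Minimal CP sketch: trains cross a shared single-track section without
# overlapping, minimising the latest completion time. Not the paper's model;
# data are toy values for illustration only.
from ortools.sat.python import cp_model

trains = {"T1": (0, 30), "T2": (10, 45), "T3": (60, 25)}  # (release, duration) in minutes
horizon = sum(d for _, d in trains.values()) + max(r for r, _ in trains.values())

model = cp_model.CpModel()
starts, intervals, ends = {}, [], []
for name, (release, dur) in trains.items():
    start = model.NewIntVar(0, horizon, f"start_{name}")
    end = model.NewIntVar(0, horizon, f"end_{name}")
    model.Add(start >= release)                         # earliest departure
    intervals.append(model.NewIntervalVar(start, dur, end, f"iv_{name}"))
    starts[name], _ = start, ends.append(end)

model.AddNoOverlap(intervals)                           # one train on the section at a time

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in trains:
        print(name, "departs at", solver.Value(starts[name]))
    print("makespan:", solver.Value(makespan))
```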
Abstract:
In 2004, the Faculty of Health Sciences at La Trobe University in Victoria, Australia, introduced a new, final-year subject, ‘Interdisciplinary Professional Practice’. The subject is taught to all students enrolled in the 11 allied health and human service disciplines at La Trobe University across metropolitan and rural campuses. The delivery is online to overcome timetabling barriers and to provide time and geographic flexibility. The subject is presented using an enquiry-based learning model. Students are exposed to the concepts of interdisciplinary teamwork through shared learning across professional boundaries to enable a collaborative workforce. An outline of the background, development and design of this subject, and of its implementation and content areas, is presented. A discussion of relevant literature and an analysis of the subject evaluations and focus groups that have guided subject development to enhance student learning over eight cohorts is included.
Abstract:
This paper presents an algorithm for solid model reconstruction from 2D sectional views based on a volume-based approach. None of the existing work in automatic reconstruction from 2D orthographic views has addressed sectional views in detail. It is believed that the volume-based approach is better suited to handle different types of sectional views. The volume-based approach constructs the 3D solid by a Boolean combination of elementary solids. The elementary solids are formed by a sweep operation on loops identified in the input views. The only adjustment to be made for the presence of sectional views is in the identification of the loops that form the elemental solids. In the algorithm, the conventions of engineering drawing for sectional views are used to identify the loops correctly. The algorithm is simple and intuitive in nature. Results have been obtained for full sections, offset sections and half sections. Future work will address other types of sectional views such as removed and revolved sections and broken-out sections.
Abstract:
In Finland, there is a desperate need for flexible, reliable and functional multi-e-learning settings for pupils aged 11-13. Southern Finland has several ongoing e-learning projects, but none that develop a multiple setting, with learning and teaching occurring between more than two schools. In 2006, internet connections were not broadband and data transfer was mainly audio data. Connection and technical problems occurred, which were an obstacle to multi-e-learning. Internet connections today enable web-based learning in major parts of Lapland, and by 2015 broadband will reach even the remotest villages up north. Therefore, it is important to research the possibilities of multi-e-learning and to build collaborative, learner-centred, versatile network models for primary school-aged pupils. The resulting model will facilitate distance learning to extend education to rural, sparsely populated areas, and it will give a model of using mobile devices in language portfolios. This will promote regional equality and prevent exclusion. Working with portfolios provides the opportunity to develop mobility from a pedagogical point of view. It is important to study the pros and cons of mobile devices in producing artefacts for portfolios in e-learning and language learning settings.
The current study represents a design-based research approach. The design research approach includes two important aspects concerning the current research: a ‘teacher as researcher’ aspect, which means the researcher can be strongly involved in the development processes, and an obstacle aspect, which means that problems encountered during development are seen as promoters in evolving the designed model, as opposed to negative results.
Abstract:
Masonry strength is dependent upon the characteristics of the masonry unit, the mortar and the bond between them. Empirical formulae as well as analytical and finite element (FE) models have been developed to predict the structural behaviour of masonry. This paper is focused on developing a three-dimensional non-linear FE model based on a micro-modelling approach to predict masonry prism compressive strength and crack pattern. The proposed FE model uses multi-linear stress-strain relationships to model the non-linear behaviour of the solid masonry unit and the mortar. Willam-Warnke's five-parameter failure theory, developed for modelling the tri-axial behaviour of concrete, has been adopted to model the failure of the masonry materials. The post-failure regime has been modelled by applying orthotropic constitutive equations based on the smeared crack approach. The compressive strength of the masonry prism predicted by the proposed FE model has been compared with experimental values as well as with the values predicted by other failure theories and the Eurocode formula. The crack pattern predicted by the FE model shows vertical splitting cracks in the prism. The FE model predicts the ultimate failure compressive stress to be close to 85% of the mean experimental compressive strength value.
Abstract:
An approach, starting with the bubble formation model of Khurana and Kumar, is presented and found to be reasonably applicable to the formation of both bubbles and drops from single submerged nozzles. The model treats both phenomena jointly as the formation of a dispersed-phase entity resulting from injection, whose size depends upon operating parameters and physical properties.
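For orientation only, the snippet below evaluates the classical static force balance (surface tension against buoyancy) that serves as the simplest limiting case of such formation models at low injection rates; it is not the Khurana-Kumar two-stage model, and the fluid properties and nozzle size are illustrative assumptions.

```python
# Simplest static limiting case for detachment from a submerged nozzle:
#   V * (rho_liquid - rho_gas) * g = pi * d_nozzle * sigma
# NOT the Khurana-Kumar model; shown only to make the dependence of the
# detached-entity size on operating parameters and physical properties explicit.
import math

sigma = 0.072               # N/m, surface tension (water-air, approximate)
rho_l, rho_g = 998.0, 1.2   # kg/m^3, liquid and gas densities
g = 9.81                    # m/s^2
d_nozzle = 2e-3             # m, nozzle diameter (hypothetical)

V = math.pi * d_nozzle * sigma / ((rho_l - rho_g) * g)   # detached volume, m^3
d_eq = (6.0 * V / math.pi) ** (1.0 / 3.0)                # equivalent sphere diameter
print(f"detached volume = {V:.3e} m^3, equivalent diameter = {d_eq * 1e3:.2f} mm")
```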
Abstract:
We present a new computationally efficient method for large-scale polypeptide folding using coarse-grained elastic networks and gradient-based continuous optimization techniques. The folding is governed by minimization of an energy based on Miyazawa–Jernigan contact potentials. Using this method we are able to substantially reduce the computation time on ordinary desktop computers for simulation of polypeptide folding starting from a fully unfolded state. We compare our results with available native-state structures from the Protein Data Bank (PDB) for a few de novo proteins and two natural proteins, Ubiquitin and Lysozyme. Based on our simulations we are able to draw the energy landscape for a small de novo protein, Chignolin. We also use two well-known protein structure prediction software packages, MODELLER and GROMACS, to compare our results. Finally, we show how a modification of the normal elastic network model can lead to higher accuracy and lower simulation time.
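As a minimal sketch of the optimisation setup only (not the paper's Miyazawa–Jernigan energy or elastic-network implementation), the snippet below minimises a toy coarse-grained chain energy, bonded springs plus a generic pairwise contact term, with a gradient-based optimiser starting from an extended, unfolded configuration.

```python
# Toy gradient-based folding of a coarse-grained chain: harmonic bonds keep
# consecutive residues near a fixed CA-CA distance and a generic Lennard-Jones
# contact term rewards compactness. Illustrates the optimisation setup only.
import numpy as np
from scipy.optimize import minimize

n_res, bond_len = 20, 3.8          # residues, target CA-CA distance (Angstrom)
rng = np.random.default_rng(1)

def energy(flat_xyz):
    xyz = flat_xyz.reshape(n_res, 3)
    bonds = np.linalg.norm(np.diff(xyz, axis=0), axis=1)
    e_bond = 100.0 * np.sum((bonds - bond_len) ** 2)        # chain connectivity
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    iu = np.triu_indices(n_res, k=2)                         # non-bonded pairs
    e_contact = np.sum(4.0 * ((4.0 / d[iu]) ** 12 - (4.0 / d[iu]) ** 6))
    return e_bond + e_contact

# Fully extended (unfolded) starting structure with a small perturbation.
x0 = np.column_stack([bond_len * np.arange(n_res),
                      np.zeros(n_res), np.zeros(n_res)])
x0 = (x0 + 0.1 * rng.normal(size=x0.shape)).ravel()

res = minimize(energy, x0, method="L-BFGS-B", options={"maxiter": 200})
print("final energy:", round(res.fun, 2))
```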
Abstract:
Mikael Juselius’ doctoral dissertation covers a range of significant issues in modern macroeconomics by empirically testing a number of important theoretical hypotheses. The first essay presents indirect evidence, within the framework of the cointegrated VAR model, on the elasticity of substitution between capital and labor, using Finnish manufacturing data. Instead of estimating the elasticity of substitution from the first-order conditions, he develops a new approach that utilizes a CES production function in a model with a three-stage decision process: investment in the long run, wage bargaining in the medium run, and price and employment decisions in the short run. He estimates the elasticity of substitution to be below one. The second essay tests the restrictions implied by the core equations of the New Keynesian Model (NKM) in a vector autoregressive (VAR) model, using both Euro area and U.S. data. Both the New Keynesian Phillips curve and the aggregate demand curve are estimated and tested. The restrictions implied by the core equations of the NKM are rejected on both U.S. and Euro area data. These results are important for further research. The third essay is methodologically similar to the second, but it concentrates on Finnish macro data by adopting the theoretical framework of an open economy. Juselius’ results suggest that the open-economy NKM framework is too stylized to provide an adequate explanation for Finnish inflation. The final essay provides a macroeconometric model of Finnish inflation and associated explanatory variables, and it estimates the relative importance of different inflation theories. His main finding is that Finnish inflation is primarily determined by excess demand in the product market and by changes in the long-term interest rate. This study is part of the research agenda carried out by the Research Unit of Economic Structure and Growth (RUESG). The aim of RUESG is to conduct theoretical and empirical research on important issues in industrial economics, real option theory, game theory, organization theory and the theory of financial systems, as well as to study problems in labor markets, macroeconomics, natural resources, taxation and time series econometrics. RUESG was established at the beginning of 1995 and is one of the National Centers of Excellence in research selected by the Academy of Finland. It is financed jointly by the Academy of Finland, the University of Helsinki, the Yrjö Jahnsson Foundation, the Bank of Finland and the Nokia Group. This support is gratefully acknowledged.
Abstract:
In this study, it is argued that the view on alliance creation presented in the current academic literature is limited, and that using a learning approach helps to explain the dynamic nature of alliance creation. The cases in this study suggest that a wealth of inefficiency elements can be found in alliance creation. These elements can further be divided into categories, which help explain the dynamics of alliance creation. The categories, combined with two models brought forward by the study, suggest that inefficiency can be avoided through learning during the creation process. Some elements are especially central to this argumentation: first, the elements related to the clarity and acceptance of the company's strategy, the potential lack of an alliance strategy, and the elements related to changes in the strategic context; second, the elements related to the length of the alliance creation process and the problems a long process entails. It is further suggested that the different inefficiency elements may create a situation where the alliance creation process is followed, sequentially and successfully, to the end, but where the inefficiencies mean that the results are not aligned with the strategic intent. The proposed solution is to monitor and assess the risk of inefficiency elements during the alliance creation process. The learning that occurs during the alliance creation process as a result of this monitoring can then lead to realignments in the process. This study proposes a model to mitigate the risk related to the inefficiencies. The model emphasizes creating an understanding of the other alliance partner's business, creating a shared vision, using pilot cooperation and building trust within the process. An analytical approach to assessing the benefits of trust is also central in this view. The alliance creation approach suggested by this study, which emphasizes trust and pilot cooperation, is further critically reviewed against contracting as a way to create alliances.
Abstract:
A nonlinear adaptive system theoretic approach is presented in this paper for effective treatment of infectious diseases that affect various organs of the human body. The generic model used does not represent any specific disease. However, it mimics the generic immunological dynamics of the human body under pathological attack, including the response to external drugs. From a system theoretic point of view, drugs can be interpreted as control inputs. Assuming a set of nominal parameters in the mathematical model, first a nonlinear controller is designed based on the principle of dynamic inversion. This treatment strategy was found to be effective in completely curing "nominal patients". However, in some cases it is ineffective in curing "realistic patients". This leads to serious (sometimes fatal) damage to the affected organ. To make the drug dosage design more effective, a model-following neuro-adaptive control design is carried out using neural networks, which are trained (adapted) online. From simulation studies, this adaptive controller is found to be effective in killing the invading microbes and healing the damaged organ even in the presence of parameter uncertainties and continuing pathogen attack.
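To make the dynamic-inversion idea concrete, the sketch below applies it to a toy scalar pathogen-load model; the model, rates and dosing constraint are illustrative assumptions and do not correspond to the paper's multi-compartment immunological dynamics or its neuro-adaptive extension.

```python
# Dynamic-inversion sketch on a toy scalar infection model:
#   x_dot = r*x - k*x*u,  x = pathogen load, u = drug dose (control input).
# Choosing u so that x_dot tracks a desired decay x_dot_des = -a*x inverts
# the dynamics:  u = (r*x - x_dot_des) / (k*x). Illustration of the principle
# only, with hypothetical parameters.
import numpy as np

r, k, a = 0.3, 0.5, 1.0        # pathogen growth rate, drug efficacy, desired decay rate
dt, x = 0.01, 5.0              # time step, initial pathogen load

for _ in range(1000):                               # simulate 10 time units
    x_dot_des = -a * x                              # desired closed-loop behaviour
    u = (r * x - x_dot_des) / (k * max(x, 1e-9))    # dynamic-inversion control law
    u = max(u, 0.0)                                 # doses cannot be negative
    x += dt * (r * x - k * x * u)                   # plant update (Euler step)

print(f"pathogen load after 10 time units: {x:.4f}")
```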
Abstract:
The increasing use of 3D modeling of the Human Face in Face Recognition systems, User Interfaces, Graphics, Gaming and the like has made it an area of active study. The majority of 3D sensors rely on color coded light projection for 3D estimation. Such systems fail to generate any response in regions covered by Facial Hair (like beard, mustache), and hence generate holes in the model which have to be filled manually later on. We propose the use of wavelet transform based analysis to extract the 3D model of Human Faces from a sinusoidal white light fringe projected image. Our method requires only a single image as input. The method is robust to texture variations on the face due to the space-frequency localization property of the wavelet transform. It can generate models to pixel-level refinement as the phase is estimated for each pixel by a continuous wavelet transform. In cases of sparse Facial Hair, the shape distortions due to hair can be filtered out, yielding an estimate for the underlying face. We use a low-pass filtering approach to estimate the face texture from the same image. We demonstrate the method on several Human Faces both with and without Facial Hair. Unseen views of the face are generated by texture mapping on different rotations of the obtained 3D structure. To the best of our knowledge, this is the first attempt to estimate 3D structure for Human Faces in the presence of Facial Hair structures like beard and mustache without generating holes in those areas.
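The phase-from-wavelet step can be illustrated in one dimension: the sketch below builds a synthetic fringe signal, convolves it with a complex Morlet wavelet matched to the carrier frequency, and recovers the modulating phase from the argument of the coefficients. The 2D processing of real face images described in the paper is not reproduced, and all signal parameters are illustrative assumptions.

```python
# 1D illustration of fringe-phase recovery with a complex Morlet wavelet.
# The argument of the wavelet coefficients gives carrier phase plus the
# depth-induced modulation; subtracting the carrier and unwrapping leaves
# the modulation. Synthetic data only.
import numpy as np

n, f_carrier = 512, 0.05                 # samples, carrier frequency (cycles/sample)
x = np.arange(n)
depth_phase = 2.0 * np.sin(2 * np.pi * x / n)           # "shape" to recover (synthetic)
fringe = 128 + 100 * np.cos(2 * np.pi * f_carrier * x + depth_phase)

# Complex Morlet wavelet tuned to the carrier frequency.
width = 40
t = np.arange(-3 * width, 3 * width + 1)
morlet = np.exp(2j * np.pi * f_carrier * t) * np.exp(-t**2 / (2.0 * width**2))

coeffs = np.convolve(fringe - fringe.mean(), morlet, mode="same")
recovered = np.unwrap(np.angle(coeffs)) - 2 * np.pi * f_carrier * x   # remove carrier

# Compare away from the convolution borders (up to an additive constant).
sl = slice(120, -120)
err = np.std((recovered - recovered.mean())[sl] - (depth_phase - depth_phase.mean())[sl])
print(f"RMS phase error away from the borders: {err:.3f} rad")
```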
Abstract:
In this paper we develop and numerically explore the modeling heuristic of using saturation attempt probabilities as state-dependent attempt probabilities in an IEEE 802.11e infrastructure network carrying packet telephone calls and TCP-controlled file downloads, using Enhanced Distributed Channel Access (EDCA). We build upon the fixed point analysis and performance insights in [1]. When a certain number of nodes of each class are contending for the channel (i.e., have nonempty queues), their attempt probabilities are taken to be those obtained from saturation analysis for that number of nodes. We then model the queue dynamics at the network nodes. With the proposed heuristic, the system evolution at channel slot boundaries becomes a Markov renewal process, and regenerative analysis yields the desired performance measures. The results obtained from this approach match well with ns2 simulations. We find that, with the default IEEE 802.11e EDCA parameters for AC 1 and AC 3, the voice call capacity decreases if even one file download is initiated by some station. Subsequently, reducing the number of voice calls increases the file download capacity almost linearly (by 1/3 Mbps per voice call for the 11 Mbps PHY).
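For context, the sketch below solves the classic homogeneous saturation fixed point that such analyses build on, using generic DCF-style backoff parameters rather than the EDCA-specific model of [1]: the per-slot attempt probability tau and the conditional collision probability p are coupled through the binary exponential backoff and solved by damped iteration. Window size, number of backoff stages and node counts are illustrative.

```python
# Bianchi-style saturation fixed point for n homogeneous saturated nodes:
#   tau = 2 / (W + 1 + p*W*sum_{i=0}^{m-1} (2p)^i),   p = 1 - (1 - tau)^(n-1)
# (equivalent to the usual closed form, written to avoid the p = 1/2 singularity).
# Generic parameters for illustration; not the EDCA model of [1].

def saturation_attempt_probability(n, W=16, m=6, iters=2000):
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        denom = W + 1 + p * W * sum((2.0 * p) ** i for i in range(m))
        tau = 0.5 * tau + 0.5 * (2.0 / denom)   # damped fixed-point iteration
    return tau, p

for n in (2, 5, 10, 20):
    tau, p = saturation_attempt_probability(n)
    print(f"n={n:2d}  tau={tau:.4f}  collision probability p={p:.4f}")
```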