22 results for computer prediction
at Universidade do Minho
Abstract:
Because different injection molding conditions tailor the mechanical response of the thermoplastic material, this effect must be considered early in the product development process. The existing approaches implemented in commercial software solutions are very limited in their ability to estimate the influence of processing conditions on the mechanical properties, so the accuracy of predictive simulations could be improved. In this study, we demonstrate how to establish straightforward processing-impact property relationships for talc-filled injection-molded polypropylene disc-shaped parts by assessing the thermomechanical environment (TME). To investigate the relationship between impact properties and the key operative variables (flow rate, melt and mold temperature, and holding pressure), a design of experiments approach was applied to systematically vary the TME of the molded samples. The TME is characterized from computer flow simulation outputs and defined by two thermomechanical indices (TMI): the cooling index (CI, associated with the core features) and the thermo-stress index (TSI, related to the skin features). The TMI methodology, coupled with an integrated simulation program, was developed as a tool to predict the impact response. The dynamic impact properties (peak force, peak energy, and puncture energy) were evaluated using instrumented falling weight impact tests and were all found to be similarly affected by the imposed TME. The most important molding parameters affecting the impact properties were found to be the processing temperatures (melt and mold). CI revealed greater importance for the impact response than TSI. The developed integrative tool provided reliable predictions of the envisaged impact properties.
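As a hedged illustration of how such processing-impact property relationships could be expressed once the thermomechanical indices are available, the sketch below fits a simple regression of one impact property on hypothetical CI and TSI values; the index values, property values, and the linear model form are assumptions, not the integrated simulation tool described above.

```python
# Minimal sketch: relating thermomechanical indices (CI, TSI) to an impact
# property with a linear model. All numbers below are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical DOE results: one row per molding condition.
ci  = np.array([0.35, 0.42, 0.50, 0.58, 0.63, 0.71])       # cooling index (core)
tsi = np.array([0.10, 0.14, 0.12, 0.18, 0.16, 0.21])       # thermo-stress index (skin)
peak_energy = np.array([8.1, 8.9, 9.6, 10.4, 10.9, 11.8])  # J, assumed values

X = np.column_stack([ci, tsi])
model = LinearRegression().fit(X, peak_energy)

print("coefficients (CI, TSI):", model.coef_)
print("predicted peak energy for CI=0.55, TSI=0.15:",
      model.predict([[0.55, 0.15]])[0])
```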
Abstract:
Electric Vehicles (EVs) have limited energy storage capacity, and their maximum autonomy range depends strongly on the driver's behaviour. Because batteries cannot be recharged quickly during a journey, it is essential that a precise range prediction is available to the driver of the EV. With this information, it is possible to check whether the desired destination is reachable without a stop to charge the batteries, or whether reaching it requires optimized driving (e.g., switching off the air conditioning, among other EV parameters). The outcome of this research work is the development of an Electric Vehicle Assistant (EVA), an application for mobile devices that helps users make efficient decisions about route planning, charging management and energy efficiency. It will therefore contribute to fostering EV adoption as a new paradigm in the transportation sector.
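A minimal sketch of the kind of reachability check such an assistant might perform, assuming a constant average consumption per kilometre and a crude penalty for auxiliary loads; the function names and figures are illustrative, not EVA's actual prediction model.

```python
# Minimal sketch of an EV reachability check. Consumption figures and the
# air-conditioning penalty are assumed values, not EVA's actual model.
def estimated_range_km(battery_kwh: float, soc: float,
                       wh_per_km: float, ac_on: bool) -> float:
    """Estimate remaining range from state of charge and average consumption."""
    usable_wh = battery_kwh * 1000.0 * soc
    ac_penalty_wh_per_km = 15.0 if ac_on else 0.0  # crude auxiliary-load penalty
    return usable_wh / (wh_per_km + ac_penalty_wh_per_km)

def destination_reachable(distance_km: float, **kwargs) -> bool:
    """True if the estimated range covers the remaining distance."""
    return estimated_range_km(**kwargs) >= distance_km

print(destination_reachable(120.0, battery_kwh=40.0, soc=0.65,
                            wh_per_km=160.0, ac_on=True))
```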
Abstract:
This paper aims at developing a collision prediction model for three-leg junctions located on national roads (NR) in Northern Portugal. The focus is to identify the factors that contribute to collision-type crashes at those locations, mainly factors related to road geometric consistency, since the literature is scarce on these, and to investigate the impact of three modeling methods (generalized estimating equations, random-effects negative binomial models and random-parameters negative binomial models) on the factors retained by those models. The database used included data published between 2008 and 2010 for 177 three-leg junctions. It was split into three groups of contributing factors, which were tested sequentially for each of the adopted models: first, traffic only; then, traffic and the geometric characteristics of the junctions within their area of influence; and, lastly, factors expressing the difference between the geometric characteristics of the segments bordering the junctions' area of influence and the segment included in that area. The choice of the best modeling technique was supported by the results of a cross-validation carried out to ascertain the best model for the three sets of contributing factors. The models fitted with random-parameters negative binomial models performed best in this process. In the best models obtained for each modeling technique, the characteristics of the road environment, including proxy measures of geometric consistency, contribute significantly to the number of collisions along with traffic volume. Both the variables concerning the junctions and the national highway segments in their area of influence, and the variations of those characteristics relative to the roadway segments bordering that area of influence, proved relevant; there is therefore a clear need to incorporate the effect of geometric consistency in safety studies of three-leg junctions.
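A minimal sketch of a fixed-effects negative binomial collision-frequency model of the general kind compared in the paper, fitted with statsmodels on hypothetical junction data; the variable names, data and the simple NB specification are assumptions, and the random-effects and random-parameters variants are not shown.

```python
# Minimal sketch: negative binomial collision-frequency model on hypothetical
# junction data. Variable names and values are assumed for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 177
data = pd.DataFrame({
    "log_aadt": np.log(rng.uniform(2000, 20000, n)),  # traffic volume exposure
    "grade_diff": rng.normal(0.0, 2.0, n),            # proxy for geometric consistency
    "curvature": rng.uniform(0.0, 1.0, n),
})
# Hypothetical crash counts roughly driven by traffic exposure.
data["crashes"] = rng.poisson(np.exp(-4.0 + 0.6 * data["log_aadt"]))

X = sm.add_constant(data[["log_aadt", "grade_diff", "curvature"]])
nb_model = sm.GLM(data["crashes"], X,
                  family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(nb_model.summary())
```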
Abstract:
Hospitals nowadays collect vast amounts of data related to patient records. These data hold valuable knowledge that can be used to improve hospital decision making. Data mining techniques aim precisely at the extraction of useful knowledge from raw data. This work describes the implementation of a medical data mining project based on the CRISP-DM methodology. Recent real-world data, from 2000 to 2013, related to inpatient hospitalization were collected from a Portuguese hospital. The goal was to predict generic hospital Length Of Stay based on indicators that are commonly available at the hospitalization process (e.g., gender, age, episode type, medical specialty). At the data preparation stage, the data were cleaned and variables were selected and transformed, leading to 14 inputs. Next, at the modeling stage, a regression approach was adopted in which six learning methods were compared: Average Prediction, Multiple Regression, Decision Tree, Artificial Neural Network ensemble, Support Vector Machine and Random Forest. The best learning model was obtained by the Random Forest method, which presents a high coefficient of determination (0.81). This model was then opened using a sensitivity analysis procedure that revealed three influential input attributes: the hospital episode type, the physical service where the patient is hospitalized and the associated medical specialty. Such extracted knowledge confirms that the obtained predictive model is credible and has potential value for supporting the decisions of hospital managers.
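A minimal sketch of the Random Forest regression step described above, using scikit-learn on synthetic stand-in data evaluated with the coefficient of determination; the feature names, codings and values are assumptions, not the hospital dataset.

```python
# Minimal sketch: Random Forest regression for Length Of Stay, evaluated with
# the coefficient of determination (R^2). Data below are synthetic stand-ins.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 2000
X = pd.DataFrame({
    "age": rng.integers(0, 95, n),
    "gender": rng.integers(0, 2, n),
    "episode_type": rng.integers(0, 3, n),        # assumed integer coding
    "medical_specialty": rng.integers(0, 20, n),  # assumed integer coding
})
# Synthetic length of stay loosely driven by age and episode type.
y = 2 + 0.05 * X["age"] + 3 * X["episode_type"] + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out data:", round(r2_score(y_test, model.predict(X_test)), 3))
```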
Abstract:
Recent research shows that the addition of Recycled Steel Fibres (RSF) recovered from waste tyres can significantly decrease the brittle behaviour of cement-based materials by improving their toughness and post-cracking resistance. In this sense, Recycled Steel Fibre Reinforced Concrete (RSFRC) seems to have the potential to constitute a sustainable material for structural and non-structural applications. To assess this potential, experimental and numerical research was performed on the use of RSFRC in elements failing in bending and in beams failing in shear. The values of the fracture mode I parameters of the developed RSFRC were determined by performing inverse analysis with the results of three-point notched beam bending tests. To assess the possibility of using RSF as shear reinforcement in Reinforced Concrete (RC) beams, three-point bending tests were executed on three series of RSFRC beams flexurally reinforced with a relatively high ratio of longitudinal steel bars, in order to ensure shear failure of all the tested beams. By performing material nonlinear simulations with a computer program based on the finite element method (FEM), the applicability of the fracture mode I crack constitutive law derived from the inverse analysis was assessed for the prediction of the behaviour of these beams. The performance of the formulations proposed by RILEM TC 162-TDF and CEB-FIP 2010 for the prediction of the shear resistance of fibre reinforced concrete elements was also evaluated.
Abstract:
Customer lifetime value (LTV) enables using client characteristics, such as recency, frequency and monetary (RFM) value, to describe the value of a client over time in terms of profitability. We present the concept of LTV applied to telemarketing for improving the return on investment, using a recent (from 2008 to 2013) and real case study of bank campaigns to sell long-term deposits. The goal was to benefit from past contact history to extract additional knowledge. A total of twelve LTV input variables were tested, under a forward selection method and using a realistic rolling windows scheme, highlighting the validity of five new LTV features. The results achieved by our LTV data-driven approach using neural networks allowed an improvement of up to 4 pp in the cumulative Lift curve for targeting the deposit subscribers when compared with a baseline model (with no history data). Explanatory knowledge was also extracted from the proposed model, revealing two highly relevant LTV features: the last result of the previous campaign to sell the same product and the frequency of past client successes. The obtained results are particularly valuable for contact center companies, which can improve predictive performance without even having to ask the companies they serve for more information.
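A minimal sketch of a rolling-window evaluation of a neural network classifier with a cumulative-lift measure, in the spirit of the scheme described above; the window sizes, the twelve stand-in inputs and the data are assumptions, not the bank telemarketing dataset.

```python
# Minimal sketch: rolling-window evaluation of a neural network with a
# cumulative lift measure. All data and window sizes are assumed.
import numpy as np
from sklearn.neural_network import MLPClassifier

def lift_at(y_true, scores, fraction=0.1):
    """Lift of the top-scored fraction relative to the overall success rate."""
    order = np.argsort(scores)[::-1]
    top = y_true[order][: max(1, int(len(y_true) * fraction))]
    return top.mean() / y_true.mean()

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 12))                     # twelve hypothetical LTV inputs
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1.5).astype(int)

train_size, test_size, lifts = 2000, 500, []
for start in range(0, n - train_size - test_size, test_size):  # rolling window
    tr = slice(start, start + train_size)
    te = slice(start + train_size, start + train_size + test_size)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                        random_state=0).fit(X[tr], y[tr])
    lifts.append(lift_at(y[te], clf.predict_proba(X[te])[:, 1]))

print("mean lift@10% over rolling windows:", round(float(np.mean(lifts)), 2))
```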
Abstract:
Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It is an area with many possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces without the need for extra devices. Thus, the primary goal of gesture recognition research is to create systems that can identify specific human gestures and use them to convey information or to control devices. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection and real-time gesture recognition. In this study we try to identify hand features that, in isolation, respond best in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, with accuracies of 91% and 90.1% respectively, obtained with a Neural Network classifier. These two methods also have the advantage of being simple in terms of computational complexity, which makes them good candidates for real-time hand gesture recognition.
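A minimal sketch of a centroid-distance feature of the kind referred to above, computed from a hand contour with NumPy; the toy contour and the fixed-length resampling are assumptions about the implementation, shown only to illustrate why the descriptor is computationally cheap.

```python
# Minimal sketch: centroid-distance feature from a hand contour.
# The contour points and the 64-bin resampling are illustrative assumptions.
import numpy as np

def centroid_distance(contour: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Distance of each contour point to the centroid, resampled to a fixed
    length and normalised so the descriptor is scale-invariant."""
    centroid = contour.mean(axis=0)
    dist = np.linalg.norm(contour - centroid, axis=1)
    # Resample to n_bins points along the contour parameterisation.
    idx = np.linspace(0, len(dist) - 1, n_bins)
    resampled = np.interp(idx, np.arange(len(dist)), dist)
    return resampled / resampled.max()

# Toy "contour": a circle standing in for a segmented hand boundary.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.column_stack([100 + 40 * np.cos(theta), 120 + 40 * np.sin(theta)])
print(centroid_distance(contour)[:8])
```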
Abstract:
"Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19"
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems could be the same for all applications, thus facilitating implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer game referee judge a game in real time.
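A minimal sketch of the two recognition stages described above: an SVM for static postures and one Gaussian HMM per dynamic gesture, with an unseen sequence assigned to the best-scoring HMM. It uses scikit-learn and hmmlearn on synthetic stand-in features; the feature dimensions, gesture labels and HMM topology are assumptions, not the trained models reported in the abstract.

```python
# Minimal sketch: SVM for static postures + one HMM per dynamic gesture.
# Features, dimensions and labels are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Static postures: SVM on fixed-length feature vectors (e.g., hand descriptors).
X_static = np.vstack([rng.normal(i, 0.3, size=(50, 16)) for i in range(4)])
y_static = np.repeat(np.arange(4), 50)
svm = SVC(kernel="rbf", probability=True).fit(X_static, y_static)

# Dynamic gestures: one GaussianHMM per gesture, trained on sequences of
# per-frame features; an unseen sequence goes to the best-scoring HMM.
def train_gesture_hmm(sequences):
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=3, n_iter=50)
    model.fit(np.vstack(sequences), lengths)
    return model

gestures = {
    "wave":   [rng.normal(0.0, 1.0, size=(30, 4)) for _ in range(20)],
    "circle": [rng.normal(2.0, 1.0, size=(30, 4)) for _ in range(20)],
}
models = {name: train_gesture_hmm(seqs) for name, seqs in gestures.items()}

test_seq = rng.normal(2.0, 1.0, size=(30, 4))
predicted = max(models, key=lambda name: models[name].score(test_seq))
print("static posture:", svm.predict(X_static[:1])[0], "| dynamic gesture:", predicted)
```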
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, thus facilitating implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
Abstract:
"Lecture notes in computer science series, ISSN 0302-9743, vol. 9273"
Abstract:
Forming suitable learning groups is one of the factors that determine the efficiency of collaborative learning activities. However, only a few studies have addressed this problem in mobile learning environments. In this paper, we propose a new approach for automatic, customized, and dynamic group formation in Mobile Computer Supported Collaborative Learning (MCSCL) contexts. The proposed solution is based on the combination of three types of grouping criteria: the learner's personal characteristics, the learner's behaviours, and context information. Instructors can freely select the type, the number, and the weight of the grouping criteria, together with other settings such as the number, the size, and the type of learning groups (homogeneous or heterogeneous). Apart from the grouping mechanism, the proposed approach provides a flexible tool to control each learner and to manage the learning process from the beginning to the end of collaborative learning activities. To evaluate the quality of the implemented group formation algorithm, we compare its Average Intra-cluster Distance (AID) with that of a random group formation method. The results show that the proposed algorithm is more effective than the random method in forming both homogeneous and heterogeneous groups.
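A minimal sketch of the Average Intra-cluster Distance comparison described above: AID is computed for a proposed grouping and for a random grouping of the same learners. The learner features, the weighting, and the group sizes are assumptions, not the implemented MCSCL algorithm.

```python
# Minimal sketch: comparing Average Intra-cluster Distance (AID) of a proposed
# grouping against a random grouping. Learner features are synthetic stand-ins.
import numpy as np

def aid(features: np.ndarray, groups: list) -> float:
    """Mean pairwise distance between members of the same group, averaged over groups."""
    per_group = []
    for members in groups:
        pts = features[members]
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        n = len(members)
        per_group.append(dists.sum() / (n * (n - 1)))  # exclude self-distances
    return float(np.mean(per_group))

rng = np.random.default_rng(0)
learners = rng.normal(size=(24, 3))          # 24 learners, 3 weighted criteria (assumed)
# Hypothetical "homogeneous" grouping: sort by one criterion, chunk into groups of 4.
order = np.argsort(learners[:, 0])
proposed = [order[i:i + 4] for i in range(0, 24, 4)]
random_groups = list(np.array_split(rng.permutation(24), 6))

print("AID proposed:", round(aid(learners, proposed), 3))
print("AID random:  ", round(aid(learners, random_groups), 3))
```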
Abstract:
The identification of new and druggable targets in bacteria is a critical endeavour in pharmaceutical research on novel antibiotics to fight infectious agents. The rapid emergence of resistant bacteria makes today's antibiotics increasingly ineffective, consequently increasing the need for new pharmacological targets and novel classes of antibacterial drugs. A new model that combines the singular value decomposition technique with biological filters, comprising a set of protein properties associated with bacterial drug targets and similarity to protein-coding essential genes of E. coli, was developed to predict potential drug targets in the Enterobacteriaceae family [1]. This model identified 99 potential target proteins in the studied bacterial family, exhibiting eight different functions, which suggests that disrupting the activities of these proteins is critical for the cells. From these candidates, one was selected for target confirmation. To find target modulators, receptor-based pharmacophore hypotheses were built and used to screen a virtual library of compounds. Post-screening filters based on physicochemical and topological similarity to known Gram-negative antibiotics were applied to the retrieved compounds. Screening hits passing all filters were docked into the protein's catalytic groove, and the 15 most promising compounds were purchased from their chemical vendors to be experimentally tested in vitro. To the best of our knowledge, this is the first attempt to rationalize the search for compounds to probe the relevance of this candidate as a new pharmacological target.
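As a heavily hedged illustration of the kind of SVD-based step mentioned above, the sketch below projects a hypothetical protein-by-property matrix onto its leading singular vectors and ranks proteins by similarity to known drug targets in that reduced space; the matrix, the properties, the number of components and the scoring rule are assumptions, not the published model.

```python
# Minimal sketch: SVD of a protein-by-property matrix and ranking of candidate
# proteins by similarity to known targets in the reduced space. All data and
# the scoring rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_proteins, n_properties, k = 200, 12, 4
M = rng.normal(size=(n_proteins, n_properties))   # hypothetical property matrix

U, s, Vt = np.linalg.svd(M, full_matrices=False)
embedding = U[:, :k] * s[:k]                      # proteins in k-dimensional space

known_targets = [3, 17, 42]                       # hypothetical indices of known targets
centroid = embedding[known_targets].mean(axis=0)

# Cosine similarity of every protein to the known-target centroid.
sims = embedding @ centroid / (
    np.linalg.norm(embedding, axis=1) * np.linalg.norm(centroid) + 1e-12)
candidates = np.argsort(sims)[::-1][:10]
print("top-ranked candidate proteins:", candidates)
```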
Abstract:
Currently, the quality of the Indonesian national road network is inadequate due to several constraints, including overcapacity and overloaded trucks. The high deterioration rate of the road infrastructure in developing countries, along with major budgetary restrictions and high growth in traffic, has led to an emerging need to improve the performance of the highway maintenance system. However, the high number of intervening factors and their complex effects require advanced tools to solve this problem successfully. The high learning capabilities of Data Mining (DM) offer a powerful solution to this problem; in the past, these tools have been successfully applied to complex and multi-dimensional problems in various scientific fields. Therefore, it is expected that DM can be used to analyze the large amount of pavement and traffic data, identify the relationships between variables, and provide predictions from the data. In this paper, we present a new approach to predict the International Roughness Index (IRI) of pavements based on DM techniques. DM was used to analyze the initial IRI data, including age, Equivalent Single Axle Load (ESAL), cracks, potholes, rutting, and long cracks. The model was developed and verified using data from the Integrated Indonesia Road Management System (IIRMS), measured with the National Association of Australian State Road Authorities (NAASRA) roughness meter. The results of the proposed approach are compared with the IIRMS analytical model adapted to the IRI, and the advantages of the new approach are highlighted. We show that the novel data-driven model is able to learn, with high accuracy, the complex relationships between the IRI and the contributing factors of overloaded trucks.
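A minimal sketch of a data-driven IRI regression of the kind described above, using scikit-learn on synthetic stand-in records; the feature names mirror those listed in the abstract, but the data, the model choice (gradient boosting) and the units are assumptions, not the IIRMS dataset or the paper's model.

```python
# Minimal sketch: data-driven prediction of the International Roughness Index
# (IRI) from pavement condition features. Data, model and units are assumed.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 1500
X = pd.DataFrame({
    "age_years": rng.uniform(0, 20, n),
    "esal_millions": rng.uniform(0.1, 5.0, n),   # Equivalent Single Axle Load
    "crack_pct": rng.uniform(0, 30, n),
    "potholes_per_km": rng.uniform(0, 10, n),
    "rut_depth_mm": rng.uniform(0, 25, n),
})
# Synthetic IRI (m/km) loosely increasing with age, load and surface distress.
y = (2.0 + 0.15 * X["age_years"] + 0.6 * X["esal_millions"]
     + 0.05 * X["crack_pct"] + 0.1 * X["rut_depth_mm"] + rng.normal(0, 0.5, n))

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.round(3))
```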