20 results for Prediction algorithms at Universidade do Minho


Relevance: 60.00%
Publisher:
Abstract:

PhD thesis in Industrial and Systems Engineering

Relevance: 30.00%
Publisher:
Abstract:

PhD thesis in Bioengineering

Relevance: 30.00%
Publisher:
Abstract:

PhD thesis in Biomedical Engineering

Relevance: 20.00%
Publisher:
Abstract:

Electric Vehicles (EVs) have limited energy storage capacity, and their maximum autonomy range is strongly dependent on the driver's behaviour. Because batteries cannot be recharged quickly during a journey, it is essential that a precise range prediction is available to the driver of the EV. With this information, it is possible to check whether the desired destination is reachable without a stop to charge the batteries, or whether reaching it requires optimized driving (e.g., switching off the air conditioning, among other EV parameters). The outcome of this research work is the development of an Electric Vehicle Assistant (EVA), an application for mobile devices that helps users make efficient decisions about route planning, charging management and energy efficiency. It will therefore contribute to fostering the adoption of EVs as a new paradigm in the transportation sector.
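As an illustration of the kind of range estimate such an assistant could expose, the sketch below derives a remaining-range figure from the battery state of charge and a consumption estimate adjusted for driving style and auxiliary loads; all names and coefficients are hypothetical and are not taken from the EVA application.

```python
# Hypothetical sketch of an EV range estimate; coefficients are illustrative only.

def estimate_range_km(battery_kwh: float, state_of_charge: float,
                      base_consumption_kwh_per_km: float,
                      aggressive_driving: bool = False,
                      air_conditioning_on: bool = False) -> float:
    """Estimate the remaining range of an EV under simple assumptions."""
    consumption = base_consumption_kwh_per_km
    if aggressive_driving:
        consumption *= 1.20   # assumed penalty for an aggressive driving style
    if air_conditioning_on:
        consumption += 0.01   # assumed extra air-conditioning draw per km
    usable_energy = battery_kwh * state_of_charge
    return usable_energy / consumption

# Example: 40 kWh pack at 60% charge, 0.15 kWh/km baseline consumption.
print(round(estimate_range_km(40.0, 0.60, 0.15, air_conditioning_on=True), 1))
```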

Relevance: 20.00%
Publisher:
Abstract:

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the paper. The authors would like to thank Dr. Elaine DeBock for reviewing the manuscript.

Relevance: 20.00%
Publisher:
Abstract:

This paper aims at developing a collision prediction model for three-leg junctions located on national roads (NR) in Northern Portugal. The focus is to identify factors that contribute to collision-type crashes at those locations, mainly factors related to road geometric consistency, since the literature is scarce on those, and to investigate the impact of three modeling methods (generalized estimating equations, random-effects negative binomial models and random-parameters negative binomial models) on the factors of those models. The database included data published between 2008 and 2010 for 177 three-leg junctions. It was split into three groups of contributing factors, tested sequentially for each of the adopted models: first, traffic only; then, traffic and the geometric characteristics of the junctions within their area of influence; and, lastly, factors expressing the difference between the geometric characteristics of the segments bordering the junctions' area of influence and the segment included in that area. The choice of the best modeling technique was supported by a cross-validation carried out for the three sets of contributing factors. The models fitted with random-parameters negative binomial specifications had the best performance in this process. In the best models obtained for every modeling technique, the characteristics of the road environment, including proxy measures for geometric consistency, along with traffic volume, contribute significantly to the number of collisions. Both the variables describing the junctions and the national road segments within their area of influence, and the variables capturing deviations from the characteristics of the roadway segments bordering that area, proved relevant; there is therefore a clear need to incorporate the effect of geometric consistency in safety studies of three-leg junctions.
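A minimal sketch of how a collision-frequency model of this kind can be fitted is shown below, using a negative binomial GLM on synthetic junction data; the variable names and the statsmodels formulation are illustrative and do not reproduce the random-parameters models of the paper.

```python
# Illustrative negative binomial collision-frequency model on synthetic junction data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 177  # same number of junctions as in the study; the data itself is synthetic
data = pd.DataFrame({
    "log_aadt": np.log(rng.uniform(1000, 20000, n)),   # traffic volume (AADT)
    "speed_diff": rng.normal(0.0, 10.0, n),            # proxy for geometric consistency
    "lane_width": rng.normal(3.5, 0.3, n),
})
mean = np.exp(-6.0 + 0.8 * data["log_aadt"] + 0.03 * data["speed_diff"])
data["collisions"] = rng.poisson(mean)                  # synthetic crash counts

X = sm.add_constant(data[["log_aadt", "speed_diff", "lane_width"]])
model = sm.GLM(data["collisions"], X,
               family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(model.summary())
```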

Relevance: 20.00%
Publisher:
Abstract:

Hospitals are nowadays collecting vast amounts of data related with patient records. All these data hold valuable knowledge that can be used to improve hospital decision making. Data mining techniques aim precisely at the extraction of useful knowledge from raw data. This work describes an implementation of a medical data mining project based on the CRISP-DM methodology. Recent real-world data, from 2000 to 2013, were collected from a Portuguese hospital and relate to inpatient hospitalization. The goal was to predict generic hospital Length Of Stay based on indicators that are commonly available at the hospitalization process (e.g., gender, age, episode type, medical specialty). At the data preparation stage, the data were cleaned and variables were selected and transformed, leading to 14 inputs. Next, at the modeling stage, a regression approach was adopted, in which six learning methods were compared: Average Prediction, Multiple Regression, Decision Tree, Artificial Neural Network ensemble, Support Vector Machine and Random Forest. The best learning model was obtained by the Random Forest method, which achieved a high coefficient of determination (0.81). This model was then opened up using a sensitivity analysis procedure that revealed three influential input attributes: the hospital episode type, the physical service where the patient is hospitalized and the associated medical specialty. Such extracted knowledge confirms that the obtained predictive model is credible and potentially valuable for supporting the decisions of hospital managers.
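A minimal sketch of this kind of Length Of Stay regression is shown below, using scikit-learn's RandomForestRegressor on synthetic admission records; the feature names are assumptions inspired by the abstract, not the hospital dataset.

```python
# Illustrative Length Of Stay regression with a Random Forest on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "gender": rng.integers(0, 2, n),               # encoded categorical
    "episode_type": rng.integers(0, 3, n),         # e.g. urgent vs. programmed
    "medical_specialty": rng.integers(0, 10, n),   # encoded categorical
})
y = 2 + 0.05 * X["age"] + 3 * (X["episode_type"] == 0) + rng.normal(0, 1.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out data:", round(r2_score(y_test, rf.predict(X_test)), 2))
```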

Relevance: 20.00%
Publisher:
Abstract:

Customer lifetime value (LTV) enables using client characteristics, such as recency, frequency and monetary (RFM) value, to describe the value of a client over time in terms of profitability. We present the concept of LTV applied to telemarketing for improving the return on investment, using a recent (from 2008 to 2013) and real case study of bank campaigns to sell long-term deposits. The goal was to benefit from the past contact history to extract additional knowledge. A total of twelve LTV input variables were tested, under a forward selection method and using a realistic rolling windows scheme, highlighting the validity of five new LTV features. The results achieved by our LTV data-driven approach using neural networks allowed an improvement of up to 4 pp in the cumulative Lift curve for targeting the deposit subscribers when compared with a baseline model (with no history data). Explanatory knowledge was also extracted from the proposed model, revealing two highly relevant LTV features: the last result of the previous campaign to sell the same product and the frequency of past client successes. The obtained results are particularly valuable for contact center companies, which can improve predictive performance without having to ask for more information from the companies they serve.
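The rolling-windows evaluation mentioned above can be sketched as follows: a classifier is repeatedly trained on a fixed-length window of past contacts and tested on the following period. The windowing logic and features are assumptions for illustration, and a logistic regression stands in for the paper's neural network.

```python
# Illustrative rolling-window evaluation of a subscription classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 5))           # e.g. RFM and past-campaign-outcome features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)

window, test = 1000, 500              # train on `window` contacts, test on the next `test`
scores = []
for start in range(0, n - window - test + 1, test):
    tr = slice(start, start + window)
    te = slice(start + window, start + window + test)
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    scores.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
print("AUC per rolling window:", [round(s, 3) for s in scores])
```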

Relevance: 20.00%
Publisher:
Abstract:

Traffic Engineering (TE) approaches are increasingly important in network management to allow an optimized configuration and resource allocation. In link-state routing, the task of setting appropriate weights to the links is both an important and a challenging optimization task. A number of different approaches have been put forward towards this aim, including the successful use of Evolutionary Algorithms (EAs). In this context, this work addresses the evaluation of three distinct EAs, one single-objective and two multi-objective EAs, in two tasks related to weight setting optimization towards optimal intra-domain routing, knowing the network topology and aggregated traffic demands and seeking to minimize network congestion. In both tasks, the optimization considers scenarios where there is a dynamic alteration in the state of the system: the first considers changes in the traffic demand matrices and the second the possibility of link failures. The methods thus need to simultaneously optimize for both conditions, the normal and the altered one, following a preventive TE approach towards robust configurations. Since this can be formulated as a bi-objective function, the use of multi-objective EAs, such as SPEA2 and NSGA-II, came naturally, and these were compared to a single-objective EA. The results show a remarkable behavior of NSGA-II in all proposed tasks, scaling well for harder instances and thus presenting itself as the most promising option for TE in these scenarios.
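The fitness evaluation at the core of such weight-setting EAs can be sketched as follows: given a candidate weight assignment, traffic is routed over shortest paths and the worst link utilisation is returned; combining this value for the normal and the altered demand scenario gives the bi-objective (here aggregated) fitness. The topology, demands and helper names below are illustrative, not the paper's setup.

```python
# Illustrative congestion evaluation of a link-weight vector under shortest-path routing.
import networkx as nx

def max_utilisation(graph, weights, demands, capacity=100.0):
    """Route each demand on the shortest path for `weights` and return the worst link load."""
    g = graph.copy()
    for (u, v), w in weights.items():
        g[u][v]["weight"] = w
    load = {e: 0.0 for e in g.edges()}
    for (src, dst), volume in demands.items():
        path = nx.shortest_path(g, src, dst, weight="weight")
        for u, v in zip(path, path[1:]):
            edge = (u, v) if (u, v) in load else (v, u)
            load[edge] += volume
    return max(l / capacity for l in load.values())

G = nx.cycle_graph(5)                                     # toy topology
weights = {e: 1 for e in G.edges()}                       # candidate weight setting
normal = {(0, 2): 30.0, (1, 4): 20.0}                     # normal demand matrix
altered = {(0, 2): 30.0, (1, 4): 20.0, (3, 0): 40.0}      # altered scenario
fitness = (max_utilisation(G, weights, normal) + max_utilisation(G, weights, altered)) / 2
print("aggregated congestion fitness:", fitness)
```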

Relevance: 20.00%
Publisher:
Abstract:

In recent years, immune systems have inspired approaches to several computational problems. This paper focuses on enhancing the accuracy of behavioural biometric authentication algorithms by applying them more than once, with different thresholds, in order to first simulate the protection provided by the skin and then look for known outside entities, as lymphocytes do. The paper describes the principles that support applying this approach to Keystroke Dynamics, an authentication biometric technology that decides on the legitimacy of a user based on the typing pattern captured as the username and/or the password is entered. As a proof of concept, the accuracy levels of one keystroke dynamics algorithm applied to five legitimate users of a system are calculated for both the traditional and the immune-inspired approaches, and the obtained results are compared.
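A minimal sketch of the two-level idea, assuming a typing sample is scored by its distance to a stored template: a strict outer threshold plays the role of the skin barrier, and a second check against known-impostor patterns mimics the lymphocyte stage. The thresholds and the distance measure are illustrative, not the paper's algorithm.

```python
# Illustrative two-stage, immune-inspired keystroke-dynamics check on key-timing vectors.
import math

def distance(sample, template):
    """Euclidean distance between two keystroke timing vectors (e.g. key hold times in ms)."""
    return math.sqrt(sum((s - t) ** 2 for s, t in zip(sample, template)))

def authenticate(sample, user_template, impostor_templates,
                 skin_threshold=60.0, lymphocyte_threshold=40.0):
    # Stage 1 ("skin"): reject samples far from the legitimate user's template.
    if distance(sample, user_template) > skin_threshold:
        return False
    # Stage 2 ("lymphocytes"): reject samples close to a known impostor pattern.
    if any(distance(sample, imp) < lymphocyte_threshold for imp in impostor_templates):
        return False
    return True

user = [120, 95, 110, 130]                 # stored template for the legitimate user
impostors = [[200, 180, 190, 210]]         # previously observed outside entities
print(authenticate([118, 100, 108, 128], user, impostors))  # expected: True
```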

Relevance: 20.00%
Publisher:
Abstract:

Because different injection molding conditions tailor the mechanical response of the thermoplastic material, this effect must be considered early in the product development process. The existing approaches implemented in different commercial software solutions are very limited in their capability to estimate the influence of processing conditions on the mechanical properties, so the accuracy of predictive simulations could be improved. In this study, we demonstrate how to establish straightforward processing-impact property relationships of talc-filled injection-molded polypropylene disc-shaped parts by assessing the thermomechanical environment (TME). To investigate the relationship between impact properties and the key operative variables (flow rate, melt and mold temperature, and holding pressure), a design of experiments approach was applied to systematically vary the TME of the molded samples. The TME is characterized from computer flow simulation outputs and defined by two thermomechanical indices (TMI): the cooling index (CI, associated with the core features) and the thermo-stress index (TSI, related to the skin features). The TMI methodology coupled to an integrated simulation program has been developed as a tool to predict the impact response. The dynamic impact properties (peak force, peak energy, and puncture energy) were evaluated using instrumented falling weight impact tests and were all found to be similarly affected by the imposed TME. The most important molding parameters affecting the impact properties were found to be the processing temperatures (melt and mold). CI revealed greater importance for the impact response than TSI. The developed integrative tool provided reliable predictions for the envisaged impact properties.
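The design of experiments over the four operative variables can be sketched with a two-level full factorial enumeration, as below; the low and high levels are hypothetical values, not the study's actual settings.

```python
# Illustrative two-level full factorial design for the four molding variables.
from itertools import product

levels = {
    "flow_rate_cm3_s": (10, 40),        # hypothetical low/high levels
    "melt_temp_C": (210, 250),
    "mold_temp_C": (20, 60),
    "holding_pressure_MPa": (20, 50),
}
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(f"{len(runs)} runs in the 2^4 design")   # 16 molding conditions
for run in runs[:3]:
    print(run)
```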

Relevance: 20.00%
Publisher:
Abstract:

This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
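A minimal sketch of the repulsion idea, assuming a 2-D system of equations: the merit function sums erf-transformed residuals, a penalty term repels the search from roots already found, and scipy's Nelder-Mead minimiser is restarted from random points. The penalty form, tolerances and constants are illustrative, not the paper's exact formulation.

```python
# Illustrative repulsion scheme with Nelder-Mead and an erf-based merit function.
import math
import numpy as np
from scipy.optimize import minimize

def residuals(x):
    """Example system: x0^2 + x1^2 - 4 = 0 and x0 - x1 = 0, roots at +-(sqrt(2), sqrt(2))."""
    return [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]]

def merit(x, found_roots, radius=0.5, penalty=10.0):
    value = sum(math.erf(abs(r)) for r in residuals(x))      # erf-based merit function
    for root in found_roots:                                 # repulsion from known roots
        if np.linalg.norm(np.asarray(x) - root) < radius:
            value += penalty
    return value

found = []
rng = np.random.default_rng(0)
for _ in range(20):                                          # multistart with repulsion
    x0 = rng.uniform(-3, 3, size=2)
    res = minimize(merit, x0, args=(found,), method="Nelder-Mead")
    # Loose tolerance for this sketch; accept only new, sufficiently separated roots.
    if res.fun < 1e-2 and all(np.linalg.norm(res.x - r) > 0.5 for r in found):
        found.append(res.x)
print("roots found:", [np.round(r, 3) for r in found])
```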

Relevance: 20.00%
Publisher:
Abstract:

The identification of new and druggable targets in bacteria is a critical endeavour in the pharmaceutical research of novel antibiotics to fight infectious agents. The rapid emergence of resistant bacteria makes today's antibiotics more and more ineffective, consequently increasing the need for new pharmacological targets and novel classes of antibacterial drugs. A new model that combines the singular value decomposition technique with biological filters, comprised of a set of protein properties associated with bacterial drug targets and similarity to protein-coding essential genes of E. coli, has been developed to predict potential drug targets in the Enterobacteriaceae family [1]. This model identified 99 potential target proteins in the studied bacterial family, exhibiting eight different functions, which suggests that the disruption of the activities of these proteins is critical for cells. Out of these candidates, one was selected for target confirmation. To find target modulators, receptor-based pharmacophore hypotheses were built and used to screen a virtual library of compounds. Post-screening filters, based on physicochemical and topological similarity to known Gram-negative antibiotics, were applied to the retrieved compounds. Screening hits passing all filters were docked into the protein's catalytic groove, and 15 of the most promising compounds were purchased from chemical vendors to be experimentally tested in vitro. To the best of our knowledge, this is the first attempt to rationalize the search for compounds to probe the relevance of this candidate as a new pharmacological target.
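One loose reading of the SVD-plus-similarity step is sketched below: proteins described by a property matrix are projected into a low-rank latent space and ranked by similarity to a set of reference essential-like proteins. The matrix, the reference indices and the ranking rule are assumptions for illustration and do not reproduce the published model.

```python
# Illustrative latent-space ranking of candidate proteins via SVD on a property matrix.
import numpy as np

rng = np.random.default_rng(0)
n_proteins, n_properties, k = 200, 12, 4
X = rng.normal(size=(n_proteins, n_properties))     # rows: proteins, cols: properties

# Low-rank latent representation from a truncated SVD of the centred matrix.
U, S, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
latent = U[:, :k] * S[:k]

# Score each protein by cosine similarity to the centroid of reference proteins.
essential_idx = [3, 17, 42]                          # hypothetical reference proteins
centroid = latent[essential_idx].mean(axis=0)
cosine = latent @ centroid / (np.linalg.norm(latent, axis=1) * np.linalg.norm(centroid))
candidates = np.argsort(-cosine)[:10]                # top-ranked potential targets
print("top candidate protein indices:", candidates)
```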

Relevance: 20.00%
Publisher:
Abstract:

Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
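A minimal sketch of the bootstrap step is shown below: the sampling distribution of a chosen statistic (here the median objective value) is estimated by resampling a small set of runs with replacement. The run values and the interval choice are illustrative, not taken from the paper.

```python
# Illustrative bootstrap estimate of a statistic's sampling distribution from a few runs.
import numpy as np

rng = np.random.default_rng(0)
runs = np.array([0.82, 0.79, 0.91, 0.77, 0.85, 0.88, 0.80, 0.84])  # objective values of 8 runs

def bootstrap(sample, statistic, n_resamples=5000):
    """Resample with replacement and collect the statistic over the resamples."""
    idx = rng.integers(0, len(sample), size=(n_resamples, len(sample)))
    return statistic(sample[idx], axis=1)

medians = bootstrap(runs, np.median)
low, high = np.percentile(medians, [2.5, 97.5])
print(f"bootstrap 95% interval for the median: [{low:.3f}, {high:.3f}]")
```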

Relevance: 20.00%
Publisher:
Abstract:

The research aimed to establish tyre-road noise models using a Data Mining approach that allowed a predictive model to be built and the importance of the tested input variables to be assessed. The data modelling took into account three learning algorithms and three metrics to define the best predictive model. The variables tested included basic properties of pavement surfaces, macrotexture, megatexture and unevenness and, for the first time, damping. The importance of those variables was also measured using a sensitivity analysis procedure. Two types of models were set: one with basic variables and another with complex variables, such as megatexture and damping, all as a function of vehicle speed. More detailed models were additionally set for each speed level. As a result, several models with very good tyre-road noise predictive capacity were achieved. The most relevant variables were Speed, Temperature, Aggregate size, Mean Profile Depth and Damping, which had the highest importance, even though influenced by speed. Megatexture and IRI had the lowest importance. The models developed in this work are applicable to truck tyre-noise prediction, represented by the AVON V4 test tyre, at the early stage of road pavement use. The obtained models are therefore highly useful for the design of pavements and for noise prediction by road authorities and contractors.
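The sensitivity analysis mentioned above can be sketched as a one-at-a-time sweep of each input on a fitted model, as below; the synthetic data, the gradient-boosting learner and the normalised input ranges are assumptions for illustration, not the study's variables or models.

```python
# Illustrative one-at-a-time sensitivity analysis of inputs to a fitted noise model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
names = ["speed", "temperature", "aggregate_size", "mpd", "damping"]
X = rng.uniform(0, 1, size=(500, len(names)))            # normalised synthetic inputs
y = 70 + 20 * X[:, 0] - 3 * X[:, 1] + 5 * X[:, 4] + rng.normal(0, 1, 500)  # noise level, dB

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Sweep each input over its range while holding the others at their mean value.
base = X.mean(axis=0)
for i, name in enumerate(names):
    grid = np.tile(base, (20, 1))
    grid[:, i] = np.linspace(X[:, i].min(), X[:, i].max(), 20)
    span = np.ptp(model.predict(grid))                   # range of the predicted response
    print(f"{name:15s} sensitivity span: {span:.2f} dB")
```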