996 results for word prediction


Relevance: 20.00%

Abstract:

Protein adsorption at solid-liquid interfaces is critical to many applications, including biomaterials, protein microarrays and lab-on-a-chip devices. Despite this general interest, and a large amount of research over the last half-century, protein adsorption cannot be predicted with engineering-level, design-oriented accuracy. Here we describe a Biomolecular Adsorption Database (BAD), freely available online, which archives the published protein adsorption data. Piecewise linear regression with breakpoint applied to the data in the BAD suggests that the input variables to protein adsorption, i.e., protein concentration in solution; protein descriptors derived from primary structure (number of residues, global protein hydrophobicity and range of amino acid hydrophobicity, isoelectric point); surface descriptors (contact angle); and fluid environment descriptors (pH, ionic strength), correlate well with the output variable, the protein concentration on the surface. Furthermore, neural network analysis revealed that the size of the BAD makes it sufficiently representative, with a neural network-based predictive error of 5% or less. Interestingly, a consistently better fit is obtained if the BAD is divided into two separate subsets representing protein adsorption on hydrophilic and hydrophobic surfaces, respectively. Based on these findings, selected entries from the BAD have been used to construct neural network-based estimation routines, which predict the amount of adsorbed protein, the thickness of the adsorbed layer and the surface tension of the protein-covered surface. While the BAD is of general interest, the prediction of the thickness and the surface tension of the protein-covered layers is of particular relevance to the design of microfluidic devices.
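The breakpoint analysis mentioned above can be sketched as a simple grid search: fit a separate least-squares line on each side of every candidate breakpoint and keep the split with the lowest total squared error. This is a minimal illustration on synthetic data, not the BAD's actual regression routine:

```python
import numpy as np

def piecewise_linear_fit(x, y):
    """Fit a two-segment linear model by grid-searching the breakpoint:
    an ordinary least-squares line is fitted on each side of every
    candidate breakpoint, and the split with the lowest total squared
    error is kept."""
    best_err, best_bp = np.inf, None
    for bp in x[2:-2]:                     # keep >= 3 points per segment
        err = 0.0
        for mask in (x <= bp, x > bp):
            coeffs = np.polyfit(x[mask], y[mask], 1)
            err += np.sum((np.polyval(coeffs, x[mask]) - y[mask]) ** 2)
        if err < best_err:
            best_err, best_bp = err, bp
    return best_bp

# Synthetic data with a change of slope at x = 5 (illustrative only)
x = np.linspace(0.0, 10.0, 41)
y = np.where(x < 5.0, x, 5.0 + 3.0 * (x - 5.0))
print(piecewise_linear_fit(x, y))
```

With more than one input variable, as in the BAD, the same idea applies per predictor, but the exhaustive search grows with the number of candidate breakpoints.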

Relevance: 20.00%

Abstract:

This paper explores the literature and analyses the different uses and understandings of the word “design” in Portuguese-colonised countries, using Brazil as the main example. It investigates the relationship between the linguistic existence of terms to define and describe “design” as an activity and field, and the roles and perceptions of Design in the general society. It also addresses the effects that the lack of a proper translation has on the local community from a cultural point of view. The current perception of Design in Portuguese colonies is associated with two main aspects: linguistic and historical. Both differentiate the countries under consideration from countries with a different background. The changes in the meaning of “design” over the years have had a great impact on people's perceptions of Design. On the other hand, the development of Design has also influenced changes in the meaning of the term, as a legacy of the colonisation period and as a characteristic of the Portuguese language. Design has developed and reached a level of excellence in Portuguese-colonised countries that competes with the most traditional Design cultures in the world. However, this level of Design remains confined to an elite within universities and specialised markets; Design is therefore not democratised. The ultimate aim of this study is to promote discussions on how to make the discourse surrounding this area more accessible to people from non-English-speaking countries that do not have the word “design” in their local language.

Relevance: 20.00%

Abstract:

Most standard algorithms for prediction with expert advice depend on a parameter called the learning rate. This learning rate needs to be large enough to fit the data well, but small enough to prevent overfitting. For the exponential weights algorithm, a line of prior work has established theoretical guarantees for higher and higher data-dependent tunings of the learning rate, which allow for increasingly aggressive learning. But in practice such theoretical tunings often still perform worse (as measured by their regret) than ad hoc tuning with an even higher learning rate. To close the gap between theory and practice we introduce an approach to learn the learning rate. Up to a factor that is at most (poly)logarithmic in the number of experts and the inverse of the learning rate, our method performs as well as if we knew the empirically best learning rate from a large range that includes both conservative small values and values much higher than those for which formal guarantees were previously available. Our method employs a grid of learning rates, yet runs in linear time regardless of the size of the grid.
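As a concrete reference point, the basic exponential weights (Hedge) update that these tunings control can be sketched as below. The loss data and the fixed grid of learning rates are illustrative; the method described above learns which rate to trust online, rather than evaluating each rate in hindsight as this sketch does:

```python
import numpy as np

def exp_weights_regret(losses, eta):
    """Run exponential weights (Hedge) at learning rate eta on a
    (T, K) loss matrix with losses in [0, 1]; return the algorithm's
    regret against the best single expert in hindsight."""
    T, K = losses.shape
    log_w = np.zeros(K)                    # log-weights, uniform prior
    total = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                       # current expert distribution
        total += p @ losses[t]             # expected loss this round
        log_w -= eta * losses[t]           # exponential weight update
    return total - losses.sum(axis=0).min()

rng = np.random.default_rng(0)
losses = rng.random((500, 10))
# Hindsight evaluation over a small grid of learning rates
for eta in (0.01, 0.1, 1.0):
    print(eta, round(exp_weights_regret(losses, eta), 2))
```

The classical guarantee for this algorithm bounds the regret by ln(K)/eta + eta*T/8, which makes the trade-off explicit: small eta inflates the first term, large eta the second.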

Relevance: 20.00%

Abstract:

Changing environments pose a serious problem for current robotic systems aiming at long-term operation under varying seasons or local weather conditions. This paper builds on our previous work, in which we proposed learning to predict the changes in an environment. Our key insight is that the occurring scene changes are in part systematic, repeatable and therefore predictable. The goal of our work is to support existing approaches to place recognition by learning how the visual appearance of an environment changes over time and by using this learned knowledge to predict its appearance under different environmental conditions. We describe the general idea of appearance change prediction (ACP) and investigate properties of our novel implementation based on vocabularies of superpixels (SP-ACP). Our previous work showed that the proposed approach significantly improves the performance of SeqSLAM and BRIEF-Gist for place recognition on a subset of the Nordland dataset under extremely different environmental conditions in summer and winter. This paper deepens the understanding of the proposed SP-ACP system and evaluates the influence of its parameters. We present the results of a large-scale experiment on the complete 10 h Nordland dataset and appearance change predictions between different combinations of seasons.
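The core idea of learning how appearance "words" translate between conditions can be sketched as a co-occurrence table over aligned observations of the same places. The visual-word ids below are toy values, far smaller than a real SP-ACP superpixel vocabulary, and the mapping rule is a deliberately simple stand-in:

```python
import numpy as np

def learn_word_mapping(src_words, dst_words, n_words):
    """Build a co-occurrence table between the visual words observed at
    the same place under two conditions, then map each source word to
    the destination word it most often co-occurs with."""
    counts = np.zeros((n_words, n_words), dtype=int)
    for s, d in zip(src_words, dst_words):
        counts[s, d] += 1
    return counts.argmax(axis=1)

# Toy aligned observations: word i in summer shows up as word (i + 1) % 4
# in winter (hypothetical ids, not from the actual SP-ACP vocabularies)
rng = np.random.default_rng(1)
summer = rng.integers(0, 4, size=200)
winter = (summer + 1) % 4
print(learn_word_mapping(summer, winter, 4))
```

At prediction time such a table lets a summer image be rewritten word by word into its expected winter appearance before it is handed to the place recognition front end.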

Relevance: 20.00%

Abstract:

Nowadays, demand for automated gas metal arc welding (GMAW) is growing, and the need for intelligent systems has consequently increased to ensure the accuracy of the procedure. To date, weld pool geometry has been the factor most used in quality assessment of intelligent welding systems, but it has recently been found that the Mahalanobis Distance (MD) not only can be used for this purpose but is also more efficient. In the present paper, an Artificial Neural Network (ANN) has been used for prediction of the MD parameter, and the advantages and disadvantages of other methods are also discussed. The Levenberg–Marquardt algorithm was found to be the most effective training algorithm for the GMAW process. It is known that the number of neurons plays an important role in optimal network design; in this work, using a trial and error method, 30 was found to be the optimal number of neurons. The model was investigated with different numbers of layers in a Multilayer Perceptron (MLP) architecture, and it was shown that for the aim of this work the optimal result is obtained with a single-layer MLP. The robustness of the system was evaluated by adding noise to the input data and studying the effect of the noise on the prediction capability of the network. The experiments for this study were conducted in an automated GMAW setup, integrated with a data acquisition system and prepared in a laboratory, for welding of 12 mm thick steel plate. The accuracy of the network was evaluated by the Root Mean Squared (RMS) error between the measured and the estimated values. The low error value (about 0.008) reflects the good accuracy of the model, and the comparison of the results predicted by the ANN with the test data set showed very good agreement, revealing the predictive power of the model. The ANN model offered here for the GMA welding process can therefore be used effectively for prediction purposes.
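The MD quality indicator itself is straightforward to compute from a set of reference measurements; a minimal sketch with hypothetical current/voltage readings follows (the study above predicts MD with an ANN rather than computing it directly from process inputs):

```python
import numpy as np

def mahalanobis(x, data):
    """Mahalanobis distance of a sample x from a set of reference
    measurements, using their mean and (pseudo-)inverse covariance."""
    mu = data.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(data, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Hypothetical welding sensor readings (current in A, voltage in V);
# illustrative numbers, not data from the study.
reference = np.array([[200.0, 24.0], [205.0, 25.0], [195.0, 23.5],
                      [198.0, 24.2], [202.0, 24.8]])
print(mahalanobis(np.array([201.0, 24.4]), reference))  # inside the cluster
print(mahalanobis(np.array([260.0, 30.0]), reference))  # clear outlier
```

Because the covariance normalises each direction by its observed spread, a large MD flags a weld whose sensor signature departs from the healthy reference population regardless of the units of the individual signals.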

Relevance: 20.00%

Abstract:

Increased longevity and the need to fund living and care expenses across late old age, greater proportions of blended and culturally diverse families, and concerns about the increasing possibility of contestation of wills all highlight the importance of understanding current will-making practices and intentions. Yet there is no current national data on the prevalence of wills, intended beneficiaries, the principles and practices surrounding will making, or the patterns and outcomes of contestation. This project sought to address this gap. This report summarises the results of a four-year program of research examining will making and will contestation in Australia. The project was funded by the Australian Research Council (LP10200891) in conjunction with seven Public Trustee Organisations across Australia. The interdisciplinary research team, with expertise in social science, social work, law and social policy, is drawn from The University of Queensland, Queensland University of Technology and Victoria University. The project comprised five research studies: a national prevalence survey, a judicial case review, a review of Public Trustee files, an online survey of will drafters and in-depth interviews with key groups of interest. The report outlines key findings and, on the basis of the evidence provided, presents recommendations to support the achievement of these policy goals: increasing will making in the Australian population, ensuring that the wills of those Australians who have taken this step reflect their current situation and intentions, and reducing will contestation.

Relevance: 20.00%

Abstract:

This article presents and evaluates a model to automatically derive word association networks from text corpora. Two aspects were evaluated: to what degree corpus-based word association networks (CANs) can approximate human word association networks with respect to (1) their ability to quantitatively predict word associations and (2) their structural network characteristics. Word association networks are the basis of the human mental lexicon. However, extracting such networks from human subjects is laborious, time consuming and thus necessarily limited relative to the breadth of human vocabulary. Automatic derivation of word associations from text corpora would address these limitations. In both evaluations, corpus-based processing provided vector representations for words. These representations were then employed to derive CANs using two measures: (1) the well-known cosine metric, which is a symmetric measure, and (2) a new asymmetric measure computed from orthogonal vector projections. For both evaluations, the full set of 4068 free association networks (FANs) from the University of South Florida word association norms was used as baseline human data. Two corpus-based models were benchmarked for comparison: a latent topic model and latent semantic analysis (LSA). We observed that CANs constructed using the asymmetric measure were slightly less effective than the topic model in quantitatively predicting free associates, and slightly better than LSA. The structural network analysis revealed that CANs approximate the FANs to an encouraging degree.
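The symmetry distinction between the two measures is easy to see in code. Cosine similarity gives the same score in both directions, whereas a projection coefficient, one illustrative asymmetric score derived from an orthogonal projection (not necessarily the exact measure used in the article), does not:

```python
import numpy as np

def cosine(u, v):
    """Symmetric cosine similarity between two word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def proj_coeff(u, v):
    """Coefficient of the orthogonal projection of u onto v.

    Asymmetric: proj_coeff(u, v) != proj_coeff(v, u) unless the two
    vectors have equal norm, so the edge u -> v can carry a different
    weight from v -> u in the resulting association network."""
    return float(u @ v / (v @ v))

# Toy 3-d "word vectors" (hypothetical; real models use many more dims)
doctor = np.array([2.0, 1.0, 0.0])
nurse = np.array([4.0, 2.0, 1.0])
print(cosine(doctor, nurse))           # identical in both directions
print(proj_coeff(doctor, nurse), proj_coeff(nurse, doctor))
```

Asymmetry matters here because human free association is itself directional: "nurse" may readily cue "doctor" while the reverse association is weaker.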

Relevance: 20.00%

Abstract:

The ability to estimate the expected Remaining Useful Life (RUL) is critical to reducing maintenance costs, operational downtime and safety hazards. In most industries, reliability analysis is based on Reliability Centred Maintenance (RCM) and lifetime distribution models. In these models, the lifetime of an asset is estimated using failure time data; however, statistically sufficient failure time data are often difficult to attain in practice due to fixed time-based replacement and the small populations of identical assets. When condition indicator data are available in addition to failure time data, one of the alternative approaches to the traditional reliability models is Condition-Based Maintenance (CBM). Covariate-based hazard modelling is one of the CBM approaches. A number of covariate-based hazard models exist; however, few studies have evaluated the performance of these models in asset life prediction across various condition indicators and levels of data availability. This paper reviews two covariate-based hazard models, the Proportional Hazard Model (PHM) and the Proportional Covariate Model (PCM). To assess the models' performance, the expected RUL is compared to the actual RUL. The outcomes demonstrate that both models achieve convincingly good results in RUL prediction; however, PCM has a smaller absolute prediction error. In addition, PHM shows an over-smoothing tendency compared to PCM under sudden changes in the condition data. Moreover, the case studies show that PCM is not biased for small sample sizes.

Relevance: 20.00%

Abstract:

Drying offers a significant increase in the shelf life of food materials, along with the modification of quality attributes due to simultaneous heat and mass transfer. Shrinkage and variations in porosity are the common macro- and microstructural changes that take place during the drying of most food materials. Although extensive research has been carried out on the prediction of shrinkage and porosity over the course of drying, no single model considers both material properties and process conditions. In this study, an attempt has been made to develop and validate shrinkage and porosity models of food materials during drying that consider both process parameters and sample properties. The stored energy within the sample, elastic potential energy, glass transition temperature and physical properties of the sample such as initial porosity, particle density, bulk density and moisture content have been taken into consideration. Physical properties were measured, and the models validated, using a universal testing machine (Instron 2 kN), a profilometer (Nanovea) and a pycnometer. In addition, COMSOL Multiphysics 4.4 was used to solve the heat and mass transfer physics. Results obtained from the shrinkage and porosity models are quite consistent with the experimental data. Successful implementation of these models would ensure the use of optimum energy in the course of drying and better quality retention of dried foods.
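Two of the bulk quantities involved have simple defining relations: apparent porosity follows from the bulk and particle densities, and a classical ideal-shrinkage assumption (volume lost equals volume of water removed) gives the volume ratio as a function of moisture content. The density values below are illustrative; the study's full models additionally couple in glass transition and stored elastic energy, which are not reproduced here:

```python
def porosity(bulk_density, particle_density):
    """Apparent porosity from bulk and particle density:
    eps = 1 - rho_bulk / rho_particle."""
    return 1.0 - bulk_density / particle_density

def ideal_shrinkage(x, x0, rho_w=1000.0, rho_s=1400.0):
    """Ideal-shrinkage volume ratio V/V0 when the dry-basis moisture
    content falls from x0 to x (kg water per kg dry solid), assuming
    the volume lost equals the volume of water removed; water and
    solid densities in kg/m^3 are illustrative values."""
    return (1.0 / rho_s + x / rho_w) / (1.0 / rho_s + x0 / rho_w)

print(porosity(600.0, 1400.0))     # e.g. a partially dried porous tissue
print(ideal_shrinkage(1.0, 5.0))   # moisture reduced from 5 to 1 kg/kg
```

Real food materials shrink less than this ideal limit once the matrix stiffens near the glass transition, which is precisely the gap the models described above aim to close.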