813 results for feature-based modelling


Relevance:

30.00%

Publisher:

Abstract:

Understanding the basis on which recruiters form hirability impressions for a job applicant is a key issue in organizational psychology and can be addressed as a social computing problem. We approach the problem from a face-to-face, nonverbal perspective where behavioral feature extraction and inference are automated. This paper presents a computational framework for the automatic prediction of hirability. To this end, we collected an audio-visual dataset of real job interviews where candidates were applying for a marketing job. We automatically extracted audio and visual behavioral cues related to both the applicant and the interviewer. We then evaluated several regression methods for the prediction of hirability scores and showed the feasibility of conducting such a task, with ridge regression explaining 36.2% of the variance. Feature groups were analyzed, and two main groups of behavioral cues were predictive of hirability: applicant audio features and interviewer visual cues, showing the predictive validity of cues related not only to the applicant, but also to the interviewer. As a last step, we analyzed the predictive validity of psychometric questionnaires often used in the personnel selection process, and found that these questionnaires were unable to predict hirability, suggesting that hirability impressions were formed based on the interaction during the interview rather than on questionnaire data.
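As a rough sketch of the regression step described above, the following code fits ridge regression to predict scores from behavioral features and reports the cross-validated variance explained. The feature names and data are invented stand-ins, not the paper's dataset.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Hypothetical illustration: predicting hirability scores from nonverbal
# behavioral cues with ridge regression. All data below are synthetic.
rng = np.random.default_rng(0)
n = 60
# columns: applicant speaking time, applicant pitch variation,
# interviewer nodding rate (invented cue names)
X = rng.normal(size=(n, 3))
true_w = np.array([0.8, 0.3, 0.5])
y = X @ true_w + rng.normal(scale=0.7, size=n)   # hirability scores

pred = cross_val_predict(Ridge(alpha=1.0), X, y, cv=5)
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot   # fraction of variance explained, out of sample
```

The paper's figure of 36.2% variance explained is exactly this kind of out-of-sample R² statistic.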

Relevance:

30.00%

Publisher:

Abstract:

Understanding the factors that shape adaptive genetic variation across species niches has become of paramount importance in evolutionary ecology, especially for understanding how adaptation to a changing climate affects the geographic ranges of species. The distribution of adaptive alleles in the ecological niche is determined by the emergence of novel mutations, their fitness consequences, and the gene flow that connects populations across the niche. Striking demographic differences and source-sink dynamics between populations at the centre and at the margin of the niche can play a major role in the emergence and spread of adaptive alleles. Although theoretical predictions have long been proposed, the origin and distribution of adaptive alleles within species niches remain untested. In this paper, we propose and discuss a novel empirical approach that combines landscape genetics with species niche modelling to test whether alleles conferring local adaptation are more likely to occur in marginal or in central populations of a species niche. We illustrate the new approach using a published data set of 21 alpine plant species genotyped with a total of 2483 amplified fragment length polymorphisms (AFLPs), distributed over more than 1733 sampling sites across the Alps. Under the assumption that alleles statistically associated with environmental variables are adaptive, we found that adaptive alleles present at the margin of a species niche were also present in the niche centre, which suggests that adaptation originates in the niche centre. These findings corroborate models of species range evolution in which the centre of the niche contributes the novel adaptive alleles, which then diffuse towards the niche margins and facilitate niche and range expansion through subsequent local adaptation.
Although these results still need to be confirmed by fitness measurements in natural populations and by functional characterisation of the genetic sequences, this study provides a first step towards understanding how adaptive genetic variation emerges and shapes species niches and geographic ranges along environmental gradients.
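The criterion "alleles statistically associated with environmental variables are adaptive" can be sketched as a simple allele-environment regression. The data and effect size below are invented for illustration; real landscape-genetics pipelines use more elaborate association tests.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch (synthetic data): test whether presence/absence of an
# AFLP allele is associated with an environmental gradient.
rng = np.random.default_rng(1)
n_sites = 200
temperature = rng.uniform(-2, 2, n_sites)   # standardised environmental variable
logit = 1.5 * temperature                    # allele favoured at warm sites (assumed)
p = 1 / (1 + np.exp(-logit))
allele = rng.binomial(1, p)                  # allele presence (0/1) per site

model = LogisticRegression().fit(temperature.reshape(-1, 1), allele)
slope = model.coef_[0, 0]   # a clearly positive slope flags the allele as
                            # putatively adaptive along this gradient
```

An allele flagged this way at marginal sites would then be checked for presence in central populations, as the study does.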

Relevance:

30.00%

Publisher:

Abstract:

Disease-causing variants of a large number of genes trigger inherited retinal degeneration leading to photoreceptor loss. Because cones are essential for daylight and central vision such as reading, mobility, and face recognition, this review focuses on a variety of animal models for cone diseases. The pertinence of using these models to reveal genotype/phenotype correlations and to evaluate new therapeutic strategies is discussed. Interestingly, several large animal models recapitulate human diseases and can serve as a strong base from which to study the biology of disease and to assess the scale-up of new therapies. Examples of innovative approaches will be presented such as lentiviral-based transgenesis in pigs and adeno-associated virus (AAV)-gene transfer into the monkey eye to investigate the neural circuitry plasticity of the visual system. The models reported herein permit the exploration of common mechanisms that exist between different species and the identification and highlighting of pathways that may be specific to primates, including humans.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To quantify the relation between body mass index (BMI) and endometrial cancer risk, and to describe the shape of the relation. DESIGN: Pooled analysis of three hospital-based case-control studies. SETTING: Italy and Switzerland. POPULATION: A total of 1449 women with endometrial cancer and 3811 controls. METHODS: Multivariate odds ratios (OR) and 95% confidence intervals (95% CI) were obtained from logistic regression models. The shape of the relation was determined using a class of flexible regression models. MAIN OUTCOME MEASURE: The relation of BMI with endometrial cancer. RESULTS: Compared with women with BMI 18.5 to <25 kg/m², the odds ratio was 5.73 (95% CI 4.28-7.68) for women with a BMI ≥35 kg/m². The odds ratios were 1.10 (95% CI 1.09-1.12) and 1.63 (95% CI 1.52-1.75), respectively, for increments of BMI of 1 and 5 units. The relation was stronger in never-users of oral contraceptives (OR 3.35, 95% CI 2.78-4.03, for BMI ≥30 versus <25 kg/m²) than in users (OR 1.22, 95% CI 0.56-2.67), and in women with diabetes (OR 8.10, 95% CI 4.10-16.01, for BMI ≥30 versus <25 kg/m²) than in those without diabetes (OR 2.95, 95% CI 2.44-3.56). The relation was best fitted by a cubic model, although after exclusion of the upper and lower 5% tails it was best fitted by a linear model. CONCLUSIONS: The results of this study confirm a role of elevated BMI in the aetiology of endometrial cancer and suggest that the risk in obese women increases in a cubic, nonlinear fashion. The relation was stronger in never-users of oral contraceptives and in women with diabetes. TWEETABLE ABSTRACT: Risk of endometrial cancer increases with elevated body weight in a cubic nonlinear fashion.
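The two per-increment odds ratios reported above are internally consistent: under a logistic model, the OR for a 5-unit increment is the 1-unit OR raised to the fifth power. A quick arithmetic check:

```python
# Dose-response consistency check: OR = 1.10 per 1 kg/m² of BMI implies
# an OR of 1.10**5 per 5-unit increment (the paper reports 1.63,
# estimated directly from the model rather than by exponentiation).
or_per_unit = 1.10
or_per_5 = or_per_unit ** 5
# or_per_5 ≈ 1.61, in line with the reported 1.63
```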

Relevance:

30.00%

Publisher:

Abstract:

Stochastic learning processes for a specific feature detector are studied. The technique is applied to nonsmooth multilayer neural networks required to perform a third-order discrimination task based on the "T-block"/"C-block" problem. Our system proves capable of achieving perfect generalization, after the presentation of a finite number of examples, by undergoing a phase transition. The corresponding annealed theory, which involves the Ising model under an external field, shows good agreement with Monte Carlo simulations.

Relevance:

30.00%

Publisher:

Abstract:

As part of a wider research effort assessing destabilizing and triggering factors in order to model cliff dynamics along the Dieppe shoreline in Upper Normandy, this study tests the capabilities of boat-based mobile LiDAR by scanning 3D point clouds of the unstable coastal cliffs. Two acquisition campaigns were performed, in September 2012 and September 2013, scanning (1) a 30-km-long shoreline and (2) the same test cliffs under different environmental conditions and device settings. The potential of the collected data for 3D modelling, change detection and landslide monitoring was then assessed. By scanning close to the coast during favourable meteorological and marine conditions, mobile LiDAR devices can quickly scan a long shoreline with a median point spacing of up to 10 cm. The acquired data are then sufficiently detailed to map geomorphological features smaller than 0.5 m². Furthermore, our capability to detect rockfalls and erosion deposits (>m³) is confirmed, since the classical approach of computing differences between sequential acquisitions reveals many cliff collapses between Pourville and Quiberville but only sparse changes between Dieppe and Belleville-sur-Mer. These different change rates result from different rockfall susceptibilities. Finally, we also confirmed the capability of boat-based mobile LiDAR to monitor single large changes, characterizing the geometry of the Dieppe landslide with its two main active scarps, a retrogression of up to 40 m and about 100,000 m³ of eroded material.
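The "classical approach of computing differences between sequential acquisitions" can be sketched as a cloud-to-cloud nearest-neighbour comparison. The clouds below are synthetic stand-ins (a planar cliff face with a simulated scar), not the Dieppe data.

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal change-detection sketch: for each point of the later epoch,
# find the distance to the nearest point of the earlier epoch and flag
# large distances as change. All geometry is invented for illustration.
rng = np.random.default_rng(2)
xz = rng.uniform(0, 10, size=(2000, 2))
epoch1 = np.column_stack([xz[:, 0], np.zeros(2000), xz[:, 1]])  # vertical cliff face
epoch2 = epoch1.copy()
epoch2[:100, 1] += 1.5          # simulate a 1.5 m-deep rockfall scar

dist, _ = cKDTree(epoch1).query(epoch2)   # cloud-to-cloud distances
changed = dist > 0.5                      # flag points displaced by > 0.5 m
n_changed = int(changed.sum())            # exactly the 100 simulated scar points
```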

Relevance:

30.00%

Publisher:

Abstract:

Coating and filler pigments have a strong influence on the properties of paper: the filler content can exceed 30%, and the pigment content of a coating is about 85-95% by weight. The physical and chemical properties of pigments differ, and knowledge of these properties is important for optimising the optical and printing properties of paper. The size and shape of pigment particles can be measured with different analysers, based on sedimentation, laser diffraction, changes in an electric field, etc. In this master's thesis, particle properties were studied primarily with a scanning electron microscope (SEM) and image analysis programs. The study covered nine pigments with different particle sizes and shapes. The pigments were analysed with two image analysis programs (INCA Feature and Poikki), a Coulter LS230 (laser diffraction) and a SediGraph 5100 (sedimentation). The results were compared to assess the effect of particle shape on the performance of the analysers; only the image analysis programs yielded parameters describing particle shape. One part of the research was also sample preparation for SEM: in an ideal sample, the individual particles are separated and distinct. The analysis methods gave different results, but the results from the image analysis programs corresponded to either sedimentation or laser diffraction, depending on the particle shape. Detailed analysis of particle shape required high magnification in the SEM, but the measured parameters described the shape of the particles very well. Large particles (ECD ~1 µm) could also be used in 3D modelling, which enabled measurement of particle thickness. The scanning electron microscope and image analysis programs proved to be effective and versatile tools for particle analysis. Further development and experience will determine the usability of the method in routine use.
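The size measure quoted above (ECD, equivalent circular diameter) is the standard quantity image analysis programs derive from a particle's projected area. A minimal sketch of the conversion:

```python
import math

# Equivalent circular diameter (ECD): the diameter of a circle whose
# area equals the particle's projected area, as reported by SEM image
# analysis. The example area is invented for illustration.
def ecd(area_um2: float) -> float:
    return 2.0 * math.sqrt(area_um2 / math.pi)

# A particle with projected area pi/4 ≈ 0.785 µm² has ECD 1.0 µm,
# the size class noted above as suitable for 3D modelling.
d = ecd(math.pi / 4)
```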

Relevance:

30.00%

Publisher:

Abstract:

In this paper the authors propose a new closed-contour descriptor that can be seen as a feature extractor for closed contours based on the Discrete Hartley Transform (DHT). Its main characteristic is that it uses only half the coefficients required by Elliptic Fourier Descriptors (EFD) to obtain a contour approximation with a similar error measure. The proposed descriptor provides an excellent information-compression capability, useful for a great number of AI applications. Moreover, it can provide scale, position and rotation invariance, and, last but not least, both the parameterization and the shape reconstructed from the compressed set can be computed very efficiently by the fast Discrete Hartley Transform algorithm. This feature extractor can be useful when the application calls for reversible features and when the user needs an easy measure of quality for a given level of compression, scalable from low to very high quality.
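A hedged sketch of the idea (my reconstruction, not the paper's exact algorithm): take the DHT of a real contour signature, keep a few dominant coefficients as the compressed feature, and reconstruct via the inverse DHT. The DHT of a real sequence can be computed from the FFT as Re(F) - Im(F).

```python
import numpy as np

# DHT-based contour compression sketch. The contour below is a
# synthetic radial signature, invented for illustration.
def dht(x):
    F = np.fft.fft(x)
    return F.real - F.imag        # cas-kernel transform via the FFT

def idht(H):
    return dht(H) / len(H)        # the DHT is (up to 1/N) its own inverse

N = 64
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
radius = 1.0 + 0.3 * np.cos(3 * t)   # a closed contour's radial signature

H = dht(radius)                       # real coefficients (half of EFD's count)
keep = 8                              # compression: keep 8 dominant coefficients
H_trunc = np.where(np.abs(H) >= np.sort(np.abs(H))[-keep], H, 0.0)
approx = idht(H_trunc)                # reversible: reconstruct the contour

err = np.max(np.abs(approx - radius))  # reconstruction error = quality measure
```

The reconstruction error gives exactly the kind of "easy measure of quality for a given level of compression" the abstract mentions.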

Relevance:

30.00%

Publisher:

Abstract:

Despite recent advances, early diagnosis of Alzheimer's disease (AD) from electroencephalography (EEG) remains a difficult task. In this paper, we offer an additional measure through which such early diagnoses can potentially be improved. One feature that has been used for discriminative classification is change in EEG synchrony. So far, only the decrease of synchrony in the higher frequencies has been analyzed in depth. In this paper, we investigate the increase of synchrony found in narrow frequency ranges within the θ band. This particular increase of synchrony is combined with the well-known decrease of synchrony in the higher-frequency band to enhance detectable differences between AD patients and healthy subjects. We propose a new synchrony ratio that maximizes the differences between the two populations. The ratio is tested on two different data sets, one containing mild cognitive impairment patients and healthy subjects, and the other containing mild AD patients and healthy subjects. The results presented in this paper show that the classification rate is improved, and the statistical difference between AD patients and healthy subjects is increased, using the proposed ratio.
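A synchrony ratio of the kind proposed above can be sketched with spectral coherence between two channels, contrasting a narrow range within θ against a higher-frequency band. The signals below are synthetic, not EEG, and the band limits are illustrative choices.

```python
import numpy as np
from scipy.signal import coherence

# Illustrative synchrony-ratio sketch with synthetic two-channel data
# sharing a 6 Hz (theta) rhythm plus independent noise.
fs = 256
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(3)
theta = np.sin(2 * np.pi * 6 * t)                 # shared theta-band rhythm
ch1 = theta + 0.5 * rng.normal(size=t.size)
ch2 = theta + 0.5 * rng.normal(size=t.size)

f, coh = coherence(ch1, ch2, fs=fs, nperseg=512)
theta_coh = coh[(f >= 5) & (f <= 7)].mean()       # narrow range within theta
high_coh = coh[(f >= 20) & (f <= 30)].mean()      # higher-frequency band
ratio = theta_coh / high_coh                      # > 1: relatively more theta synchrony
```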

Relevance:

30.00%

Publisher:

Abstract:

Rosin is a natural product from pine forests and is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids; Ca- and Ca/Mg-resinates in particular find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the non-linear increase of solution viscosity during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates used in printing inks. The concept of a critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the non-linear increase of solution viscosity during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses, using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line monitoring of the resinate process. In the kinetic studies, two main reaction steps were observed during the syntheses: first a fast, irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C.
Rosin oil is formed during the decarboxylation step, causing a significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined from the increase of resinate concentration during the decarboxylation step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses, and different decarboxylation mechanisms were proposed for the free and the solvating rosin acids. The deduced kinetic model agreed with the analytical data of the syntheses over a wide resinate concentration region, a wide range of viscosity values and different reaction temperatures. In addition, the application of the kinetic model to modified resinate syntheses gave a good fit. A novel synthesis method, with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture, was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target product viscosity at 235 °C. Because the reaction temperature is lower than in the traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
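Fitting a two-parameter semi-empirical viscosity model to concentration data, as the thesis does, can be sketched with a non-linear least-squares fit. The exponential functional form and all numbers below are invented for illustration; the thesis derives its own model from apparent reaction kinetics.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit a two-parameter model of the sharp, non-linear
# rise of solution viscosity with resinate concentration.
def viscosity(c, a, b):
    return a * np.exp(b * c)        # assumed functional form, for illustration

c = np.linspace(0.1, 0.8, 8)        # resinate mass fraction (synthetic)
eta = viscosity(c, 2.0, 6.0)        # "measured" viscosities, noise-free here

(a_fit, b_fit), _ = curve_fit(viscosity, c, eta, p0=(1.0, 5.0))
# a_fit ≈ 2.0, b_fit ≈ 6.0: the fitted model can then predict the
# concentration (and hence reaction time) at which a target viscosity is reached
```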

Relevance:

30.00%

Publisher:

Abstract:

Fine mineral powders are commonly used in the paper and paint industries and in ceramics, and research into utilizing different waste materials in these applications is environmentally important. In this work, the ultrafine grinding of two waste gypsum materials, namely FGD (flue gas desulphurisation) gypsum and phosphogypsum from a phosphoric acid plant, with an attrition bead mill and with a jet mill was studied. The objective of this research was to test the suitability of the attrition bead mill and of the jet mill for producing gypsum powders with a particle size of a few microns. The grinding conditions were optimised by studying the influence of the different operational grinding parameters on the grinding rate and on the energy consumption of the process, in order to achieve a product fineness such as that required in the paper industry with as low an energy consumption as possible. Based on the experimental results, the most influential parameters in attrition grinding were the bead size, the stirrer type, and the stirring speed. Judged by product fineness and the specific energy consumption of grinding, the best conditions for the attrition grinding process are to grind the material with small grinding beads and a high rotational speed of the stirrer; in addition, with a suitable grinding additive a finer product is achieved at a lower energy consumption. In jet mill grinding the most influential parameters were the feed rate, the volumetric flow rate of the grinding air, and the height of the internal classification tube. The optimised condition for the jet mill is to grind with a small feed rate and a large volumetric flow rate of grinding air when the internal classification tube is set low. A finer product at a higher production rate was achieved with the attrition bead mill than with the jet mill; thus attrition grinding is better suited to the ultrafine grinding of gypsum than jet grinding.
Finally, the suitability of the population balance model for the simulation of grinding processes was studied with different S, B and C functions. A new S function for modelling the attrition mill and a new C function for modelling the jet mill were developed. The suitability of the selected models with the developed grinding functions was tested by fitting the particle size distributions of the grinding products and then comparing the fitted size distributions with the measured particle sizes. According to the simulation results, the models are suitable for the estimation and simulation of the studied grinding processes.
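The batch-grinding population balance underlying such simulations can be sketched with a few size classes, a selection function S (breakage rates) and a breakage distribution b. The three-class values below are invented for illustration.

```python
import numpy as np

# Minimal population balance sketch for batch grinding:
# dm_i/dt = -S_i * m_i + sum_{j<i} b_ij * S_j * m_j
S = np.array([0.5, 0.2, 0.0])       # selection rates, 1/min (finest class unbroken)
b = np.array([[0.0, 0.0, 0.0],      # b[i][j]: fraction of broken class-j mass
              [0.6, 0.0, 0.0],      # reporting to class i (columns sum to 1)
              [0.4, 1.0, 0.0]])
m = np.array([1.0, 0.0, 0.0])       # initial mass fractions: all coarse

dt, t_end = 0.01, 10.0              # explicit Euler time stepping
for _ in range(int(t_end / dt)):
    dm = -S * m + b @ (S * m)
    m = m + dt * dm
# total mass is conserved while material moves to the finest class
```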

Relevance:

30.00%

Publisher:

Abstract:

Synchronous motors are used mainly in large drives, for example in ship propulsion systems and in the rolling mills of steel factories, because of their high efficiency, high overload capacity and good performance in the field-weakening range. This, however, requires an extremely good torque control system: a fast torque response and high torque accuracy are basic requirements for such a drive. For large-power, high-dynamic-performance drives, the well-known principle of field-oriented vector control has hitherto been used exclusively, but nowadays it is not the only way to implement such a drive; a new control method, Direct Torque Control (DTC), has also emerged. The performance of a high-quality torque control such as DTC in dynamically demanding industrial applications rests mainly on an accurate estimate of the space vectors of the various flux linkages. Industrial motor control systems today are real-time applications with restricted calculation capacity, so the control system requires a motor model that is simple, fast to calculate and reasonably accurate. In this work, a method to handle these problems in a Direct Torque Controlled (DTC) salient-pole synchronous motor drive is proposed. A motor model is presented that combines a "voltage model" based on the induction law with a "current model" based on the motor inductance parameters. The voltage model operates as the main model and is calculated at a very fast sampling rate (for example 40 kHz). The stator flux linkage, calculated by integrating the stator voltages, is corrected using the stator flux linkage computed from the current model. The current model acts as a supervisor that merely prevents the stator flux linkage estimate from drifting erroneously over longer time intervals. At very low speeds the role of the current model is emphasised, but the voltage model always remains the main model; at higher speeds the current-model correction acts as a stabiliser of the control system.
The current model contains a set of inductance parameters that must be known. The validity of the current model in the steady state is not self-evident; it depends on the accuracy of the saturated values of the inductances. A parameter measurement procedure for the motor model, in which the supply inverter is used as the measurement signal generator, is presented. This so-called identification run can be performed prior to delivery or during drive commissioning. A derivation method for the inductance models used to represent the saturation effects is proposed. The performance of an electrically excited synchronous motor supplied by the DTC inverter is demonstrated with experimental results. It is shown that good static accuracy of the DTC torque controller can be obtained for an electrically excited synchronous motor. The dynamic response is fast, and a new operating point is reached without oscillation; operation is stable throughout the speed range. Modelling the saturation of the magnetising inductance is essential, and cross-saturation has to be considered as well, since its effect is very significant. A DTC inverter can be used as measuring equipment, and the parameters needed for the motor model can be determined by the inverter itself. The main advantage is that the parameters are measured under similar magnetic operating conditions, so no inconsistency between the parameters will exist. The inductance models generated are adequate to meet the requirements of dynamically demanding drives.
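The voltage-model integrator with a slow current-model correction can be illustrated conceptually on a single axis. Everything below (gain, offset, flux value) is an invented toy, not the thesis's actual estimator equations.

```python
# Conceptual one-axis sketch of the combined flux estimator: the fast
# "voltage model" integrates the back-EMF, and a slow correction term
# pulls the integrator towards the "current model" flux estimate so
# that a small measurement offset cannot make it drift away.
dt = 1 / 40_000                      # 40 kHz sampling, as in the text
gain = 50.0                          # correction bandwidth (assumed value)
flux_current_model = 1.0             # current-model flux estimate, Wb (assumed)

psi = 0.0                            # voltage-model integrator state
for _ in range(40_000):              # simulate 1 s of operation
    emf = 0.001                      # residual offset: pure drift if uncorrected
    psi += dt * (emf + gain * (flux_current_model - psi))
# psi settles near the current-model value instead of drifting
```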

Relevance:

30.00%

Publisher:

Abstract:

This thesis concentrates on developing a practical local-approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of the stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during the numerical iteration, which greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated with various numerical examples. It was found that the true mid-point algorithm (a = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment was made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion and Thomason's plastic limit-load failure criterion.
Significant differences in the ductility predicted by the three criteria were found. By assuming that voids grow spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified, and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model; material failure is thus a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant, and the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be predicted well by the present methodology. This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is for the analysis of fracture behaviour and crack development, as well as for structural integrity assessment, in practical problems involving inhomogeneous materials. Finally, a procedure for the engineering application of the present methodology is suggested and discussed.
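The accuracy gap between Euler forward and generalized mid-point integration that motivates the proposed algorithm family can be illustrated on a scalar ODE (not the Gurson-Tvergaard equations themselves, which require a full return-mapping scheme):

```python
import math

# Generic illustration: integrate y' = -y over [0, 1] with the
# generalized mid-point rule y_{k+1} = y_k + h*f((1-a)*y_k + a*y_{k+1}).
# a = 0 is Euler forward; a = 0.5 is the true mid-point rule (here the
# implicit update is solved in closed form since f is linear).
def integrate(a, n=100):
    h, y = 1.0 / n, 1.0
    for _ in range(n):
        y = y * (1 - (1 - a) * h) / (1 + a * h)
    return y

exact = math.exp(-1.0)
err_euler = abs(integrate(0.0) - exact)   # first-order accurate
err_mid = abs(integrate(0.5) - exact)     # second-order: much smaller error
```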

Relevance:

30.00%

Publisher:

Abstract:

The present thesis is focused on minimizing the experimental effort needed for the prediction of pollutant propagation in rivers, by means of mathematical modelling and knowledge reuse. The mathematical modelling is based on the well-known advection-dispersion equation, while the knowledge reuse approach employs methods of case-based reasoning, graphical analysis and text mining. The contributions of the thesis to the pollutant transport research field are: (1) analytical and numerical models for the prediction of pollutant transport; (2) two novel techniques that enable the use of parameters that vary along the river in analytical models; (3) models for the estimation of the characteristic parameters of pollutant transport (velocity, dispersion coefficient and nutrient transformation rates) as functions of water flow, channel characteristics and/or seasonality; (4) a graphical analysis method for the identification of pollution sources along rivers; (5) a case-based reasoning tool for the identification of crucial information related to pollutant transport modelling; and (6) the application of a software tool for the reuse of information in pollutant transport modelling research. These support tools are applicable both in water quality research and in practice, as they can be involved in multiple activities. The models are capable of predicting pollutant propagation along rivers for both ordinary pollution and accidents. They can also be applied to similar rivers when modelling pollutant transport in rivers for which few experimental concentration data are available, because the parameter estimation models developed in this thesis enable the calculation of the characteristic transport parameters as functions of river hydraulic parameters and/or seasonality.
The similarity between rivers is assessed using case-based reasoning tools, and additional necessary information can be identified using the software for information reuse. Such systems provide support for users and open up possibilities for new modelling methods, monitoring facilities and better river water quality management tools. They are also useful for estimating the environmental impact of possible technological changes, and can be applied at the pre-design stage and/or in the practical operation of processes.
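The analytical building block the models share is the classical 1-D advection-dispersion solution for an instantaneous mass release. All parameter values below are invented for illustration:

```python
import math

# 1-D advection-dispersion, instantaneous release of mass M at x = 0:
# C(x, t) = M / (A * sqrt(4*pi*D*t)) * exp(-(x - u*t)**2 / (4*D*t))
def concentration(x, t, M=10.0, A=20.0, u=0.5, D=2.0):
    """M: released mass (kg); A: channel cross-section (m^2);
    u: flow velocity (m/s); D: dispersion coefficient (m^2/s)."""
    spread = math.sqrt(4 * math.pi * D * t)
    return M / (A * spread) * math.exp(-(x - u * t) ** 2 / (4 * D * t))

# The concentration peak travels with the flow: at t = 1000 s it sits
# at x = u*t = 500 m, far above the concentration 400 m upstream.
peak = concentration(500.0, 1000.0)
upstream = concentration(100.0, 1000.0)
```

The thesis's parameter-estimation models supply u and D as functions of hydraulic parameters and/or seasonality, so this solution can be applied where concentration measurements are scarce.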

Relevance:

30.00%

Publisher:

Abstract:

This work presents models and methods that have been used in producing forecasts of population growth, with emphasis on the reliability bounds of the model forecasts. The Leslie model and various versions of logistic population models are presented, with references to the literature and to several studies; much of the relevant methodology has been developed in the biological sciences. The Leslie modelling approach uses current trends in mortality, fertility, migration and emigration; it treats the population divided into age groups, and the model is given as a recursive system. Another group of models is based on straightforward extrapolation of census data, with trajectories of a simple exponential growth function and of logistic models used to produce the forecast. The work presents the basics of Leslie-type modelling and of the logistic models, including multi-parameter logistic functions. The latter model is also analysed from the point of view of model reliability. A Bayesian approach and the MCMC method are used to create error bounds for the model predictions.
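The recursive Leslie projection described above can be sketched with a small age-structured example. The fertility and survival rates below are invented for illustration:

```python
import numpy as np

# Leslie-type projection with three age groups: n_{t+1} = L @ n_t.
# Top row: age-specific fertilities; sub-diagonal: survival rates.
L = np.array([[0.0, 1.2, 0.8],
              [0.7, 0.0, 0.0],     # survival from age class 1 to 2
              [0.0, 0.8, 0.0]])    # survival from age class 2 to 3
n = np.array([100.0, 80.0, 60.0])  # current population by age group

for _ in range(10):                # project 10 time steps
    n = L @ n

# The dominant eigenvalue of L gives the asymptotic growth rate; for
# these invented rates it exceeds 1, so the population grows.
growth = np.max(np.abs(np.linalg.eigvals(L)))
```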