915 results for Model Classification


Relevância:

20.00%

Publicador:

Resumo:

The existence of juxtaposed regions of distinct cultures, despite the fact that people's beliefs tend to become more similar as individuals interact repeatedly, is a puzzling phenomenon in the social sciences. Here we study an extreme version of the frequency-dependent bias model of social influence in which an individual adopts the opinion shared by the majority of the members of its extended neighborhood, which includes the individual itself. This is a variant of the majority-vote model in which the individual retains its opinion in case of a tie among the neighbors' opinions. We assume that the individuals are fixed at the sites of a square lattice of linear size L and that they interact with their nearest neighbors only. Within a mean-field framework, we derive the equations of motion for the density of individuals adopting a particular opinion in the single-site and pair approximations. Although the single-site approximation predicts a single opinion domain that takes over the entire lattice, the pair approximation yields a qualitatively correct picture with the coexistence of different opinion domains and a strong dependence on the initial conditions. Extensive Monte Carlo simulations indicate the existence of a rich distribution of opinion domains or clusters, the number of which grows with L² whereas the size of the largest cluster grows with ln L². The analysis of the sizes of the opinion domains shows that they obey a power-law distribution for not too large sizes but are exponentially distributed in the limit of very large clusters. In addition, similarly to another well-known social influence model, Axelrod's model, we find that these opinion domains are unstable to the effect of a thermal-like noise.
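
The tie-retaining update rule described above can be sketched in a few lines; the lattice size, number of sweeps, and random-sequential update scheme below are illustrative choices, not the paper's exact protocol. Each site adopts the majority opinion of its four nearest neighbours plus itself, so a 2-2 tie among the neighbours leaves the site's own opinion unchanged.

```python
import numpy as np

def majority_vote_sweep(spins, rng):
    """One random-sequential sweep: each chosen site adopts the opinion
    of the majority of its von Neumann neighbours plus itself.  The
    5-site count is odd, so the sum is never zero; a 2-2 neighbour tie
    is decided by the site's own current opinion."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        s = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
             + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
             + spins[i, j])
        spins[i, j] = 1 if s > 0 else -1
    return spins

rng = np.random.default_rng(0)
L = 32
spins = rng.choice([-1, 1], size=(L, L))   # random initial opinions
for _ in range(50):
    majority_vote_sweep(spins, rng)
```

After a few dozen sweeps the lattice typically freezes into coexisting opinion domains rather than a single consensus, in line with the pair-approximation picture.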

The problem of semialgebraic Lipschitz classification of quasihomogeneous polynomials on a Hölder triangle is studied. For this problem, the "moduli" are described completely in certain combinatorial terms.

We study a general stochastic rumour model in which an ignorant individual has a certain probability of becoming a stifler immediately upon hearing the rumour. We refer to this special kind of stifler as an uninterested individual. Our model also includes distinct rates for meetings between two spreaders in which both become stiflers or only one does, so that particular cases are the classical Daley-Kendall and Maki-Thompson models. We prove a Law of Large Numbers and a Central Limit Theorem for the proportions of those who ultimately remain ignorant and those who have heard the rumour but become uninterested in it.
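
The dynamics described (a Maki-Thompson-type rumour process extended with immediately-uninterested stiflers and a distinct outcome for spreader-spreader meetings) can be sketched as follows; the parameter names and the simple uniform-meeting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import random

def simulate_rumour(n, p_uninterested, p_both_stifle, seed=0):
    """Toy rumour simulation.  States: 0 ignorant, 1 spreader, 2 stifler.
    A spreader meets a uniformly random individual; an ignorant hearer
    becomes a stifler ("uninterested") with probability p_uninterested,
    otherwise a spreader; when two spreaders meet, the initiator always
    stifles and the other does too with probability p_both_stifle."""
    rng = random.Random(seed)
    state = [0] * n
    state[0] = 1
    spreaders = {0}
    while spreaders:
        i = rng.choice(tuple(spreaders))
        j = rng.randrange(n)
        if j == i:
            continue
        if state[j] == 0:                      # spreader meets ignorant
            if rng.random() < p_uninterested:
                state[j] = 2                   # hears it, loses interest
            else:
                state[j] = 1
                spreaders.add(j)
        elif state[j] == 1:                    # spreader meets spreader
            state[i] = 2
            spreaders.discard(i)
            if rng.random() < p_both_stifle:
                state[j] = 2
                spreaders.discard(j)
        else:                                  # spreader meets stifler
            state[i] = 2
            spreaders.discard(i)
    return state.count(0) / n                  # final ignorant proportion

frac_ignorant = simulate_rumour(2000, p_uninterested=0.2, p_both_stifle=0.5)
```

The Law of Large Numbers in the abstract concerns exactly this final proportion: for large n, repeated runs concentrate around a deterministic limit.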

A mechanism for the kinetic instabilities observed in the galvanostatic electro-oxidation of methanol is suggested and a model developed. The model is investigated using stoichiometric network analysis as well as concepts from algebraic geometry (polynomial rings and ideal theory) revealing the occurrence of a Hopf and a saddle-node bifurcation. These analytical solutions are confirmed by numerical integration of the system of differential equations. (C) 2010 American Institute of Physics

Currently there is a trend toward expansion of the area cropped with sugarcane (Saccharum officinarum L.), driven by an increase in the world demand for biofuels due to economic, environmental, and geopolitical issues. Although sugarcane is traditionally harvested by burning dried leaves and tops, the unburned, mechanized harvest has been progressively adopted. The use of process-based models is useful in understanding the effects of plant litter on soil C dynamics. The objective of this work was to use the CENTURY model to evaluate the effect of sugarcane residue management on the temporal dynamics of soil C. The approach taken in this work was to parameterize the CENTURY model for the sugarcane crop, to simulate the temporal dynamics of soil C, validating the model with field experiment data, and finally to make long-term predictions regarding soil C. The main focus of this work was the comparison of soil C stocks between the burned and unburned litter management systems, but the effects of mineral fertilizer and organic residue applications were also evaluated. The simulations were performed with data from experiments with different durations, from 1 to 60 yr, in Goiana and Timbaúba, Pernambuco, and Pradópolis, São Paulo, all in Brazil, and in Mount Edgecombe, KwaZulu-Natal, South Africa. It was possible to simulate the temporal dynamics of soil C (R² = 0.89). The predictions made with the model revealed a long-term trend toward higher soil C stocks under the unburned management. This increase is conditioned by factors such as climate, soil texture, time of adoption of the unburned system, and N fertilizer management.

Stream discharge-concentration relationships are indicators of terrestrial ecosystem function. Throughout the Amazon and Cerrado regions of Brazil, rapid changes in land use and land cover may be altering these hydrochemical relationships. The current analysis focuses on factors controlling the discharge-calcium (Ca) concentration relationship, since previous research in these regions has demonstrated both positive and negative slopes in linear log10(discharge)-log10(Ca concentration) regressions. The objective of the current study was to evaluate factors controlling stream discharge-Ca concentration relationships, including year, season, stream order, vegetation cover, land use, and soil classification. It was hypothesized that land use and soil class are the most critical attributes controlling discharge-Ca concentration relationships. A multilevel, linear regression approach was utilized with data from 28 streams throughout Brazil. These streams come from three distinct regions and varied broadly in watershed size (< 1 to > 10^6 ha) and discharge (10^-5.7 to 10^3.2 m^3 s^-1). Linear regressions of log10(Ca) versus log10(discharge) in 13 streams have a preponderance of negative slopes, with only two streams having significant positive slopes. An ANOVA decomposition suggests the effect of discharge on Ca concentration is large but variable. Vegetation cover, which incorporates aspects of land use, explains the largest proportion of the variance in the effect of discharge on Ca, followed by season and year. In contrast, stream order, land use, and soil class explain most of the variation in stream Ca concentration. In the current data set, soil class, which is related to lithology, has an important effect on Ca concentration, but land use, likely through its effect on runoff concentration and hydrology, has a greater effect on discharge-concentration relationships.
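
A minimal illustration of fitting one such log10-log10 discharge-concentration regression, on synthetic data only (the dilution slope of -0.15, the concentration scale, and the noise level are invented for the example, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stream: a dilution-type response in which Ca concentration
# falls as discharge rises, giving a negative log-log slope.
discharge = 10 ** rng.uniform(-2, 2, size=200)                        # m^3 s^-1
ca = 8.0 * discharge ** -0.15 * 10 ** rng.normal(0, 0.05, size=200)   # mg L^-1

# Ordinary least squares in log10-log10 space, as in the abstract's
# single-stream regressions (the study itself used a multilevel model).
slope, intercept = np.polyfit(np.log10(discharge), np.log10(ca), 1)
```

A negative fitted slope, as recovered here, corresponds to the dilution behaviour that dominated 13 of the study's streams.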

Quality control of toys to avoid children's exposure to potentially toxic elements is of utmost relevance, and it is a common requirement in national and/or international norms for health and safety reasons. Laser-induced breakdown spectroscopy (LIBS) was recently evaluated at the authors' laboratory for direct analysis of plastic toys, and one of the main difficulties for the determination of Cd, Cr and Pb was the variety of mixtures and types of polymers. As most norms rely on migration (lixiviation) protocols, chemometric classification models built from LIBS spectra were tested for sampling toys that present potential risk of Cd, Cr and Pb contamination. The classification models were generated from the emission spectra of 51 polymeric toys using Partial Least Squares Discriminant Analysis (PLS-DA), Soft Independent Modeling of Class Analogy (SIMCA) and K-Nearest Neighbor (KNN). The classification models and validations were carried out with 40 and 11 test samples, respectively. Best results were obtained with KNN, with correct predictions varying from 95% for Cd to 100% for Cr and Pb. (C) 2011 Elsevier B.V. All rights reserved.
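
KNN, the best-performing classifier in this abstract, can be sketched in a few lines; the two-feature toy data below stand in for the actual LIBS emission spectra, which have far higher dimensionality.

```python
import numpy as np

def knn_predict(X_train, y_train, X_new, k=3):
    """Minimal K-Nearest Neighbour classifier: label each new sample
    with the majority class among its k closest training samples
    (Euclidean distance)."""
    preds = []
    for x in np.atleast_2d(X_new):
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Two invented "contamination risk" classes (0: low risk, 1: high risk)
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
labels = knn_predict(X, y, np.array([[0.05, 0.1], [1.0, 0.95]]))
```

With spectra as inputs, the same scheme would operate on vectors of emission-line intensities rather than two coordinates.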

Objective: We carry out a systematic assessment of a suite of kernel-based learning machines for the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
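
The two kernel functions named above (Gaussian and exponential radial basis functions, each governed by a radius parameter) are commonly written as below; normalisation conventions vary across implementations, so these exact forms are an assumption rather than the study's definition.

```python
import numpy as np

def gaussian_rbf(x, z, radius):
    """Gaussian RBF kernel: k(x, z) = exp(-||x - z||^2 / (2 r^2))."""
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * radius ** 2))

def exponential_rbf(x, z, radius):
    """Exponential RBF kernel: k(x, z) = exp(-||x - z|| / (2 r^2)).
    Unlike the Gaussian, the distance enters unsquared, giving a
    sharper peak at x = z."""
    return np.exp(-np.linalg.norm(x - z) / (2.0 * radius ** 2))
```

Sweeping the radius over a grid of values, as the study did with 26 settings, traces out exactly the kernel-parameter sensitivity profiles discussed in the conclusions.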

The search for more realistic modeling of financial time series reveals several stylized facts of real markets. In this work we focus on the multifractal properties found in price and index signals. Although the usual minority game (MG) models do not exhibit multifractality, we study here one of its variants that does. We show that the nonsynchronous MG model in the nonergodic phase is multifractal and, in this sense, together with other stylized facts, constitutes a better modeling tool. Using the structure function (SF) approach, we detected the stationary and scaling ranges of the time series generated by the MG model and, from the linear (nonlinear) behavior of the SF, we identified the fractal (multifractal) regimes. Finally, using the wavelet transform modulus maxima (WTMM) technique, we obtained its multifractal spectrum width for different dynamical regimes. (C) 2009 Elsevier Ltd. All rights reserved.
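
The structure function approach mentioned above estimates S_q(tau) = <|x(t+tau) - x(t)|^q>; a scaling exponent zeta(q) that is linear in q signals a monofractal series, while curvature in zeta(q) signals multifractality. The sketch below runs it on a plain Brownian surrogate as a sanity check (zeta(2) should be close to 1 for H = 1/2), not on the MG series from the paper:

```python
import numpy as np

def structure_function(x, q, taus):
    """Empirical structure function S_q(tau) = mean(|x(t+tau) - x(t)|^q),
    computed over all available increments at each lag tau."""
    return np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])

rng = np.random.default_rng(2)
walk = np.cumsum(rng.standard_normal(20000))        # Brownian surrogate
taus = np.array([1, 2, 4, 8, 16, 32])
s2 = structure_function(walk, 2, taus)

# Scaling exponent zeta(2) from the log-log slope; ~1 for Brownian motion.
zeta2 = np.polyfit(np.log(taus), np.log(s2), 1)[0]
```

Repeating the fit for several q and checking whether zeta(q) bends away from a straight line is the multifractality test the abstract refers to.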

We study the dynamics of the adoption of new products by agents with continuous opinions and discrete actions (CODA). The model is such that refusal to adopt a new idea or product is increasingly weighted by neighboring agents as evidence against the product. Under these rules, we study the distribution of adoption times and the final proportion of adopters in the population. We compare the case where initial adopters are clustered to the case where they are randomly scattered around the social network, and investigate small-world effects on the final proportion of adopters. The model predicts a fat-tailed distribution for late adopters, which is verified by empirical data. (C) 2009 Elsevier B.V. All rights reserved.
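
A CODA-style dynamics can be sketched roughly as follows; the ring topology, step size, and uniform initial opinions are illustrative assumptions, not the paper's specification. Each agent holds a continuous opinion (a log-odds-like score) but displays only the discrete action sign(opinion), and updates its opinion after observing a neighbour's action.

```python
import random

def coda_adoption(n, steps, step_size=0.5, seed=3):
    """Sketch of continuous-opinions/discrete-actions adoption on a ring:
    an agent observes one random neighbour's visible action (adopt if
    opinion > 0, refuse otherwise) and shifts its own continuous opinion
    towards or away from adoption accordingly."""
    rng = random.Random(seed)
    opinion = [rng.uniform(-1, 1) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + rng.choice((-1, 1))) % n        # left or right neighbour
        opinion[i] += step_size if opinion[j] > 0 else -step_size
    adopters = sum(o > 0 for o in opinion)
    return adopters / n                          # final adopter proportion

final_fraction = coda_adoption(500, 20000)
```

Seeding the initial adopters in a cluster versus scattering them, as the paper does, amounts to changing how the initial `opinion` signs are laid out on the network.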

The Random Parameter model was proposed to explain the structure of the covariance matrix in problems where most, but not all, of the eigenvalues of the covariance matrix can be explained by Random Matrix Theory. In this article, we explore the scaling properties of the model, as observed in the multifractal structure of the simulated time series. We use the Wavelet Transform Modulus Maxima technique to obtain the dependence of the multifractal spectrum on the parameters of the model. The model shows a scaling structure compatible with the stylized facts for a reasonable choice of parameter values. (C) 2009 Elsevier B.V. All rights reserved.

Today several different unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied for clustering pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named the Enhanced Independent Component Analysis Mixture Model (EICAMM), built by proposing some modifications to the Independent Component Analysis Mixture Model (ICAMM). Such improvements were proposed by considering some of the model's limitations and by analyzing how it could be made more efficient. Moreover, a pre-processing methodology was also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, EICAMM and other self-organizing models were applied to segmenting images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results for the proposals presented herein. (C) 2008 Published by Elsevier B.V.

In this work we study an agent-based model to investigate the role of asymmetric information degrees in market evolution. The model is quite simple and may be treated analytically, since consumers evaluate the quality of a certain good taking into account only the quality of the last good purchased plus their perceptive capacity beta. As a consequence, the system evolves according to a stationary Markov chain. The value of a good offered by the firms increases with quality according to an exponent alpha, which is a measure of the technology. It incorporates all the technological capacity of the production systems, such as education, scientific development and techniques that change the productivity rates. The technological level plays an important role in explaining how the asymmetry of information may affect market evolution in this model. We observe that, for high technological levels, the market can detect adverse selection. The model allows us to compute the maximum asymmetric information degree before the market collapses. Below this critical point the market evolves for a limited period of time and then dies out completely. When beta is close to 1 (symmetric information), the market becomes more profitable for high-quality goods, although high- and low-quality markets coexist. The maximum asymmetric information level is a consequence of an ergodicity breakdown in the process of quality evaluation. (C) 2011 Elsevier B.V. All rights reserved.

Traditionally, chronotype classification is based on the Morningness-Eveningness Questionnaire (MEQ). It is implicit in the classification that intermediate individuals give intermediate answers to most of the MEQ questions. However, a small group of individuals has a different pattern of answers. In some questions they answer as "morning-types" and in others as "evening-types," resulting in an intermediate total score. "Evening-type" and "morning-type" answers were set as A1 and A4, respectively. Intermediate answers were set as A2 and A3. The following algorithm was applied: Bimodality Index = (ΣA1 × ΣA4)² − (ΣA2 × ΣA3)². Neither-types with positive bimodality scores were classified as bimodal. If our hypothesis is validated by objective data, an update of the chronotype classification will be required. (Author correspondence: brunojm@ymail.com)
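
The Bimodality Index formula given above translates directly into code; the example answer vectors below are invented to show the sign of the index for an extreme-answering versus an intermediate-answering neither-type.

```python
def bimodality_index(answers):
    """Bimodality Index = (sum(A1) * sum(A4))**2 - (sum(A2) * sum(A3))**2,
    where A1 counts "evening-type" answers, A4 counts "morning-type"
    answers, and A2/A3 count the two intermediate answer categories.
    `answers` is a sequence of category codes 1-4, one per MEQ question."""
    a1, a2, a3, a4 = (answers.count(k) for k in (1, 2, 3, 4))
    return (a1 * a4) ** 2 - (a2 * a3) ** 2

# An intermediate scorer with answers piled at both extremes -> positive
# index (bimodal); one with mostly intermediate answers -> negative.
bimodal = bimodality_index([1, 1, 4, 4, 1, 4, 2, 3])
unimodal = bimodality_index([2, 3, 2, 3, 2, 3, 1, 4])
```

Both example vectors would yield similar intermediate MEQ totals, yet the index separates them by sign, which is exactly the distinction the abstract proposes.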

The Brazilian Atlantic Forest is one of the richest biodiversity hotspots in the world. Paleoclimatic models have predicted two large stability regions in its northern and central parts, whereas southern regions might have suffered strong instability during Pleistocene glaciations. Molecular phylogeographic and endemism studies show, nevertheless, contradictory results: although some results validate these predictions, other data suggest that paleoclimatic models fail to predict stable rainforest areas in the south. Most studies, however, have surveyed species with relatively high dispersal rates, whereas taxa with lower dispersal capabilities should be better predictors of habitat stability. Here, we have used two land planarian species as model organisms to analyse the patterns and levels of nucleotide diversity at a locality within the Southern Atlantic Forest. We find that both species harbour high levels of genetic variability without exhibiting the molecular footprint of recent colonization or population expansion, suggesting a long-term stability scenario. The results indicate, therefore, that paleoclimatic models may fail to detect refugia in the Southern Atlantic Forest, and that model organisms with low dispersal capability can improve the resolution of these models.