25 results for "Deterministic walkers"
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
In this paper, we present a novel texture analysis method based on deterministic partially self-avoiding walks and fractal dimension theory. After finding the attractors of the image (sets of pixels) using deterministic partially self-avoiding walks, the attractors are dilated toward the whole image by adding pixels according to their relevance. The relevance of each pixel is calculated as the shortest path between the pixel and the pixels that belong to the attractors. The proposed texture analysis method is demonstrated to outperform popular and state-of-the-art methods (e.g. Fourier descriptors, co-occurrence matrices, Gabor filters and local binary patterns), as well as the deterministic tourist walk method and recent fractal methods, on well-known texture image datasets.
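The walk at the heart of this family of methods can be sketched compactly. Below is a minimal, illustrative implementation of a deterministic partially self-avoiding ("tourist") walk over a 1-D row of pixel intensities; the 1-D setting, the intensity-difference rule and the tie-break by index are assumptions for illustration, not the authors' exact formulation:

```python
def tourist_walk(pixels, start, mu, max_steps=1000):
    """Move to the pixel with the closest intensity that was not visited in
    the last `mu` steps; stop once a (position, memory) state repeats, i.e.
    the walk has entered its attractor."""
    path = [start]
    seen = set()
    for _ in range(max_steps):
        memory = tuple(path[-mu:])          # forbidden window of recent pixels
        state = (path[-1], memory)
        if state in seen:                   # deterministic dynamics => cycle
            break
        seen.add(state)
        current = path[-1]
        candidates = [i for i in range(len(pixels))
                      if i != current and i not in memory]
        if not candidates:
            break
        path.append(min(candidates,
                        key=lambda i: (abs(pixels[i] - pixels[current]), i)))
    return path

# A walker started at pixel 0 is trapped by the close 10/11 intensity pair:
trajectory = tourist_walk([10, 12, 50, 11, 52, 13], start=0, mu=1)
```

The set of pixels the walk cycles over is its attractor; the texture descriptor then grows from such attractors, as the abstract describes.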
Abstract:
Complex networks have attracted increasing interest from various fields of science. It has been demonstrated that each complex network model presents specific topological structures which characterize its connectivity and dynamics. Complex network classification relies on the use of representative measurements that describe topological structures. Although a large number of measurements exist, most of them are correlated. To overcome this limitation, this paper presents a new measurement for complex network classification based on partially self-avoiding walks. We validate the measurement on a data set composed of 40,000 complex networks from four well-known models. Our results indicate that the proposed measurement improves the correct classification rate compared to traditional measurements. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4737515]
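A walk-based network measurement of the kind described can be sketched as follows. This is a hypothetical illustration: the movement rule (lowest-degree unforbidden neighbour, ties broken by label) and the summary as a (transient length, attractor period) pair are assumptions, not the paper's definition:

```python
def graph_walk_features(adj, start, mu):
    """Deterministic partially self-avoiding walk on adjacency dict `adj`
    with a memory of the last `mu` vertices; returns the walk's
    (transient length, attractor period)."""
    path = [start]
    first_seen = {}
    while True:
        state = (path[-1], tuple(path[-mu:]))
        if state in first_seen:                  # cycle (attractor) detected
            t0 = first_seen[state]
            return t0, len(path) - 1 - t0
        first_seen[state] = len(path) - 1
        memory = set(path[-mu:])
        options = [v for v in adj[path[-1]] if v not in memory]
        if not options:                          # walker got stuck
            return len(path) - 1, 0
        path.append(min(options, key=lambda v: (len(adj[v]), v)))

# On a 4-cycle the walker immediately oscillates between two vertices:
features = graph_walk_features({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]},
                               start=0, mu=1)
```

A classification feature vector would then aggregate such pairs over many starting vertices and memory sizes, so that different network models yield different distributions.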
Abstract:
Walking on irregular surfaces and in the presence of unexpected events is a challenging problem for bipedal machines. To date, their ability to cope with gait disturbances is far less successful than humans': neither trajectory-controlled robots nor dynamic walking machines (Limit Cycle Walkers) are able to handle them satisfactorily. Humans, in contrast, reject gait perturbations naturally and efficiently, relying on their sensory organs which, if needed, elicit a recovery action. A similar approach may be envisioned for bipedal robots and exoskeletons: an algorithm continuously observes the state of the walker and, if an unexpected event happens, triggers an adequate reaction. This paper presents a monitoring algorithm that provides immediate detection of any type of perturbation based solely on a phase representation of the normal walking of the robot. The proposed method was evaluated on a Limit Cycle Walker prototype that suffered push and trip perturbations at different moments of the gait cycle, providing 100% successful detections for the current experimental apparatus and adequately tuned parameters, with no false positives when the robot walked unperturbed.
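The phase-based monitoring idea can be illustrated with a simple sketch (not the paper's algorithm): each sensor sample is compared against a nominal, phase-indexed template built from unperturbed gait cycles, and a perturbation is flagged when the sample leaves a mean ± k·std band at its phase. All signal names and thresholds below are assumptions:

```python
import math

def build_template(cycles):
    """cycles: list of equal-length nominal gait cycles of one sensor;
    returns per-phase mean and standard deviation."""
    n, m = len(cycles[0]), len(cycles)
    mean = [sum(c[i] for c in cycles) / m for i in range(n)]
    std = [math.sqrt(sum((c[i] - mean[i]) ** 2 for c in cycles) / m)
           for i in range(n)]
    return mean, std

def is_perturbed(sample, phase_idx, template, k=3.0, floor=1e-3):
    """True when `sample` at gait phase `phase_idx` leaves the nominal band."""
    mean, std = template
    return abs(sample - mean[phase_idx]) > k * max(std[phase_idx], floor)
```

A push shows up as a sample far outside the band at its phase, while nominal samples stay inside it; this separation is what allows detection without false positives during unperturbed walking.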
Abstract:
In this paper, the effects of uncertainty and of expected costs of failure on optimum structural design are investigated by comparing three distinct formulations of structural optimization problems. Deterministic Design Optimization (DDO) allows one to find the shape or configuration of a structure that is optimum in terms of mechanics, but the formulation grossly neglects parameter uncertainty and its effects on structural safety. Reliability-Based Design Optimization (RBDO) has emerged as an alternative to properly model the safety-under-uncertainty part of the problem. With RBDO, one can ensure that a minimum (and measurable) level of safety is achieved by the optimum structure. However, results depend on the failure probabilities used as constraints in the analysis. Risk Optimization (RO) increases the scope of the problem by addressing the conflicting goals of economy and safety. This is accomplished by quantifying the monetary consequences of failure, as well as the costs associated with construction, operation and maintenance. RO yields the optimum topology and the optimum point of balance between economy and safety. Results are compared for some example problems. The broader RO solution is found first, and the optimum results are used as constraints in DDO and RBDO. Results show that even when optimum safety coefficients are used as constraints in DDO, the formulation leads to configurations which respect these design constraints and reduce manufacturing costs, but increase total expected costs (including expected costs of failure). When the (optimum) system failure probability is used as a constraint in RBDO, this solution also reduces manufacturing costs while increasing total expected costs. This happens when the costs associated with different failure modes are distinct. Hence, a general equivalence between the formulations cannot be established. Optimum structural design considering expected costs of failure can be controlled neither by safety factors nor by failure probability constraints alone, but will depend on the actual structural configuration. (c) 2011 Elsevier Ltd. All rights reserved.
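The distinction between the formulations can be shown in a toy one-variable problem. Every number below (costs, the failure-probability curve) is invented: a DDO/RBDO-style design minimizes manufacturing cost subject to a reliability constraint, while the risk optimum minimizes manufacturing cost plus the expected cost of failure:

```python
import math

def pf(a):                      # assumed failure probability of a member of size a
    return math.exp(-2.0 * a)

def manufacturing(a):           # assumed manufacturing cost
    return 10.0 * a

def total_expected(a, cost_failure=1000.0):
    return manufacturing(a) + pf(a) * cost_failure

grid = [0.5 + 0.01 * i for i in range(500)]
ddo = min((a for a in grid if pf(a) <= 1e-3), key=manufacturing)  # reliability constraint
ro = min(grid, key=total_expected)                                # risk optimum
```

In this toy problem the risk optimum accepts a failure probability larger than the 10⁻³ constraint, because the extra material would cost more than the expected loss it prevents; this is one way the two formulations select different designs.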
Abstract:
Texture image analysis is an important field of investigation that has attracted attention from the computer vision community over the last decades. In this paper, a novel approach for texture image analysis is proposed using a combination of graph theory and partially self-avoiding deterministic walks. From the image, we build a regular graph in which each vertex represents a pixel and is connected to its neighboring pixels (pixels whose spatial distance is less than a given radius). Transformations on the regular graph are applied to emphasize different image features. To characterize the transformed graphs, partially self-avoiding deterministic walks are performed to compose the feature vector. Experimental results on three databases indicate that the proposed method significantly improves the correct classification rate compared to the state of the art, e.g. from 89.37% (original tourist walk) to 94.32% on the Brodatz database, from 84.86% (Gabor filter) to 85.07% on the Vistex database, and from 92.60% (original tourist walk) to 98.00% on the plant leaves database. In view of these results, the method is expected to provide good results in other applications, such as texture synthesis and texture segmentation. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Recently, there has been considerable interest in dynamic textures, due to the explosive growth of multimedia databases. In addition, dynamic texture appears in a wide range of videos, which makes it very important for applications that involve modeling physical phenomena. Thus, dynamic textures have emerged as a new field of investigation that extends static or spatial textures to the spatio-temporal domain. In this paper, we propose a novel approach for dynamic texture segmentation based on automata theory and the k-means algorithm. In this approach, a feature vector is extracted for each pixel by applying deterministic partially self-avoiding walks on three orthogonal planes of the video. These feature vectors are then clustered by the well-known k-means algorithm. Although the k-means algorithm has shown interesting results, it only guarantees convergence to a local minimum, which affects the final segmentation result. To overcome this drawback, we compare six initialization methods for k-means. The experimental results demonstrate the effectiveness of our proposed approach compared to state-of-the-art segmentation methods.
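Why initialization matters for k-means can be seen in a stdlib-only sketch: Lloyd's algorithm run from a seed far outside the data gets stuck with an empty cluster, while a simple farthest-point seeding recovers the two true groups. The data and both seedings are illustrative assumptions, not the six initializations compared in the paper:

```python
def kmeans(points, centers, iters=50):
    """Plain Lloyd's algorithm on 1-D points; returns centers and inertia
    (within-cluster sum of squared distances)."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
            clusters[j].append(p)
        # empty clusters keep their previous center
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    inertia = sum(min((p - c) ** 2 for c in centers) for p in points)
    return centers, inertia

points = [0, 1, 2, 10, 11, 12]
far = max(points, key=lambda p: (p - points[0]) ** 2)   # farthest-point seeding
_, good = kmeans(points, [points[0], far])
_, bad = kmeans(points, [0, 100])                       # a poor seeding
```

With the poor seeding, no point is ever assigned to the outlying center, so the algorithm converges to a single effective cluster with far higher inertia; better seeding strategies exist precisely to avoid such local minima.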
Abstract:
Dynamic texture is a recent field of investigation that has received growing attention from the computer vision community in recent years. These patterns are moving textures in which the concept of self-similarity for static textures is extended to the spatio-temporal domain. In this paper, we propose a novel approach for dynamic texture representation that can be used for both texture analysis and segmentation. In this method, deterministic partially self-avoiding walks are performed on three orthogonal planes of the video in order to combine appearance and motion features. We validate our method on three applications of dynamic texture that present interesting challenges: recognition, clustering and segmentation. Experimental results on these applications indicate that the proposed method improves dynamic texture representation compared to the state of the art.
Abstract:
In this paper we study how deterministic features presented by a system can be used to perform direct transport in a weakly dissipative system with a quasisymmetric potential. We show that the presence of nonhyperbolic regions around acceleration areas of the phase space plays an important role in the acceleration of particles, giving rise to direct transport in the system. Such an effect can be observed over a large interval of the weak asymmetric-potential parameter, allowing the possibility of obtaining useful work from unbiased nonequilibrium fluctuations in real systems, even in the presence of a quasisymmetric potential.
Abstract:
The use of antiretroviral therapy has proven to be remarkably effective in controlling the progression of human immunodeficiency virus (HIV) infection and prolonging patients' survival. Therapy, however, may fail, and these benefits can therefore be compromised by the emergence of HIV strains that are resistant to the therapy. In view of these facts, finding the reason why drug-resistant strains emerge during therapy has become a problem of great worldwide interest. This paper presents a deterministic HIV-1 model to examine the mechanisms underlying the emergence of drug resistance during therapy. The aim of this study is to determine whether, and how fast, antiretroviral therapy may determine the emergence of drug resistance by calculating the basic reproductive numbers. The existence, feasibility and local stability of the equilibria are also analyzed. By performing numerical simulations, we show that a Hopf bifurcation may occur. The model suggests that individuals with drug-resistant infection may play an important role in the HIV epidemic. (C) 2011 Elsevier Ireland Ltd. All rights reserved.
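The role of reproductive numbers in this kind of analysis can be illustrated with a hedged toy model (not the paper's system): two viral strains compete for target cells, and therapy with efficacy eps suppresses only the drug-sensitive strain. All rate constants are invented; the effective reproductive numbers predict which strain dominates, and a forward-Euler integration confirms it:

```python
LAM, D = 10.0, 0.1          # target-cell supply and death rate (assumed)
B1, B2 = 0.02, 0.012        # infectivity: sensitive, resistant strain (assumed)
DELTA = 0.5                 # infected-cell death rate (assumed)

def R_eff(eps):
    """Effective reproductive numbers at the uninfected steady state."""
    T0 = LAM / D
    return (1 - eps) * B1 * T0 / DELTA, B2 * T0 / DELTA

def simulate(eps, t_end=2000.0, dt=0.01):
    """Forward-Euler integration of target cells T and infected cells I1, I2."""
    T, I1, I2 = LAM / D, 1e-3, 1e-3
    for _ in range(int(t_end / dt)):
        dT = LAM - D * T - (1 - eps) * B1 * T * I1 - B2 * T * I2
        dI1 = (1 - eps) * B1 * T * I1 - DELTA * I1
        dI2 = B2 * T * I2 - DELTA * I2
        T, I1, I2 = T + dt * dT, I1 + dt * dI1, I2 + dt * dI2
    return I1, I2
```

Without therapy the pair of reproductive numbers is (4.0, 2.4) and the sensitive strain wins; at eps = 0.8 it becomes (0.8, 2.4), the sensitive strain falls below the persistence threshold of 1, and the resistant strain takes over — the kind of competitive-release mechanism such models are used to examine.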
Abstract:
Titan's optical and near-IR spectra result primarily from the scattering of sunlight by haze and its absorption by methane. With a column abundance of 92 km amagat (11 times that of Earth), Titan's atmosphere is optically thick, and only ~10% of the incident solar radiation reaches the surface, compared to 57% on Earth. Such a formidable atmosphere obstructs investigations of the moon's lower troposphere and surface, which are highly sensitive to the radiative transfer treatment of methane absorption and haze scattering. The absorption and scattering characteristics of Titan's atmosphere have been constrained by the Huygens Probe Descent Imager/Spectral Radiometer (DISR) experiment for conditions at the probe landing site (Tomasko, M.G., Bézard, B., Doose, L., Engel, S., Karkoschka, E. [2008a]. Planet. Space Sci. 56, 624-644; Tomasko, M.G. et al. [2008b]. Planet. Space Sci. 56, 669-707). Cassini's Visual and Infrared Mapping Spectrometer (VIMS) data indicate that the rest of the atmosphere (except for the polar regions) can be understood with small perturbations in the high-haze structure determined at the landing site (Penteado, P.F., Griffith, C.A., Tomasko, M.G., Engel, S., See, C., Doose, L., Baines, K.H., Brown, R.H., Buratti, B.J., Clark, R., Nicholson, P., Sotin, C. [2010]. Icarus 206, 352-365). However, the in situ measurements were analyzed with a doubling-and-adding radiative transfer calculation that differs considerably from the discrete-ordinates codes used to interpret remote data from Cassini and ground-based measurements. In addition, the calibration of the VIMS data with respect to the DISR data has not yet been tested. Here, VIMS data of the probe landing site are analyzed with the DISR radiative transfer method and the faster discrete-ordinates radiative transfer calculation; both models are consistent (to within 0.3%) and reproduce the scattering and absorption characteristics derived from in situ measurements.
Constraints on the atmospheric opacity at wavelengths outside those measured by DISR, that is from 1.6 to 5.0 μm, are derived using clouds as diffuse reflectors, in order to derive Titan's surface albedo to within a few percent error and cloud altitudes to within 5 km error. VIMS spectra of Titan at 2.6-3.2 μm indicate not only spectral features due to CH4 and CH3D (Rannou, P., Cours, T., Le Mouélic, S., Rodriguez, S., Sotin, C., Drossart, P., Brown, R. [2010]. Icarus 208, 850-867), but also a fairly uniform absorption of unknown source, equivalent to the effect of darkening the haze to a single-scattering albedo of 0.63 ± 0.05. Titan's 4.8 μm spectrum points to a haze optical depth of 0.2 at that wavelength. Cloud spectra at 2 μm indicate that the far wings of the Voigt profile extend 460 cm⁻¹ from methane line centers. This paper releases the doubling-and-adding radiative transfer code developed by the DISR team, so that future studies of Titan's atmosphere and surface are consistent with the findings of the Huygens Probe. We derive the surface albedo in eight spectral regions of the 8 × 12 km² area surrounding the Huygens landing site. Within the 0.4-1.6 μm spectral region, our surface albedos match DISR measurements, indicating that DISR and VIMS measurements are consistently calibrated. These values, together with albedos at the longer 1.9-5.0 μm wavelengths not sampled by DISR, resemble a dark version of the spectrum of Ganymede's icy leading hemisphere. The eight surface albedos of the landing site are consistent with, but not deterministic of, exposed water ice with dark impurities. (C) 2011 Elsevier Inc. All rights reserved.
Abstract:
We present a study of the stellar parameters and iron abundances of 18 giant stars in six open clusters. The analysis was based on high-resolution, high-S/N spectra obtained with the UVES spectrograph (VLT-UT2). The results complement our previous study, in which 13 clusters were analyzed. The total sample of 18 clusters is part of a program to search for planets around giant stars. The results show that the 18 clusters cover a metallicity range between -0.23 and +0.23 dex. Together with the derived stellar masses, these metallicities will allow the metallicity and mass effects to be disentangled when analyzing the frequency of planets as a function of these stellar parameters.
Abstract:
The amount of information exchanged per unit of time between two nodes in a dynamical network, or between two data sets, is a powerful concept for analysing complex systems. This quantity, known as the mutual information rate (MIR), is calculated from the mutual information, which is rigorously defined only for random systems. Moreover, the definition of mutual information is based on probabilities of significant events. This work offers a simple alternative way to calculate the MIR in dynamical (deterministic) networks, or between two time series (not fully deterministic), and to calculate its upper and lower bounds, without having to calculate probabilities but rather in terms of well-known and well-defined quantities in dynamical systems. As possible applications of our bounds, we study the relationship between synchronisation and the exchange of information in a system of two coupled maps and in experimental networks of coupled oscillators.
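The link between synchronisation and information exchange in coupled maps can be sketched with a rough, self-contained estimate: the mutual information (in bits) between two coupled logistic maps is computed from a binned joint histogram. The map, coupling scheme and bin count are arbitrary choices for illustration; the MIR would further divide such an estimate by the relevant correlation time:

```python
import math

def coupled_series(c, n=20000, burn=1000):
    """Two diffusively coupled logistic maps; c = 0 decouples them,
    c = 0.5 synchronizes them completely after one step."""
    f = lambda u: 4.0 * u * (1.0 - u)
    x, y = 0.3, 0.6
    xs, ys = [], []
    for i in range(n + burn):
        x, y = (1 - c) * f(x) + c * f(y), (1 - c) * f(y) + c * f(x)
        if i >= burn:
            xs.append(x)
            ys.append(y)
    return xs, ys

def mutual_information(xs, ys, bins=16):
    """Histogram-based MI estimate in bits over the unit square."""
    n = len(xs)
    joint, px, py = {}, {}, {}
    for x, y in zip(xs, ys):
        i = min(int(x * bins), bins - 1)
        j = min(int(y * bins), bins - 1)
        joint[i, j] = joint.get((i, j), 0) + 1
        px[i] = px.get(i, 0) + 1
        py[j] = py.get(j, 0) + 1
    return sum(cnt / n * math.log2(cnt * n / (px[i] * py[j]))
               for (i, j), cnt in joint.items())
```

Uncoupled maps give an MI near zero (only estimation bias), while the fully synchronised pair yields an MI equal to the entropy of the binned invariant measure — consistent with the intuition that synchronisation maximises the information shared between the two nodes.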
Abstract:
Maize is one of the most important crops in the world. The products generated from this crop are largely used in the starch industry, the animal and human nutrition sector, and biomass energy production and refineries. For these reasons, there is much interest in estimating the potential grain yield of maize genotypes in relation to the environment in which they will be grown, as productivity directly affects agribusiness or farm profitability. Questions like these can be investigated with ecophysiological crop models, which can be organized according to different philosophies and structures. The main objective of this work is to conceptualize a stochastic model for predicting maize grain yield and productivity under different conditions of water supply, while considering the uncertainties of daily climate data. Therefore, one focus is to explain the model construction in detail, and the other is to present some results in light of the philosophy adopted. A deterministic model was built as the basis for the stochastic model. The former performed well in terms of the shape of the above-ground dry matter curve over time, as well as the grain yield under full and moderate water-deficit conditions. Through the use of a triangular distribution for the harvest index and a bivariate normal distribution of the averaged daily solar radiation and air temperature, the stochastic model satisfactorily simulated grain productivity: it was found that 10,604 kg ha⁻¹ is the most likely grain productivity, very similar to the productivity simulated by the deterministic model and observed under the real conditions of a field experiment.
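A schematic Monte Carlo in the spirit of the described stochastic model: the harvest index comes from a triangular distribution, the season-mean solar radiation and air temperature from a correlated bivariate normal, and the yield response itself is a deliberately simplified stand-in — every number below is an assumption, not a calibrated parameter of the paper's model:

```python
import math
import random

random.seed(42)

def sample_climate(mu_rad=18.0, mu_temp=24.0, sd_rad=2.0, sd_temp=1.5, rho=-0.4):
    """Draw (radiation, temperature) from a bivariate normal via Cholesky."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    rad = mu_rad + sd_rad * z1
    temp = mu_temp + sd_temp * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    return rad, temp

def grain_yield(rad, temp, hi):
    """Toy response: biomass proportional to radiation, penalized away from
    an optimum temperature, then multiplied by the harvest index."""
    biomass = 1200.0 * rad * max(0.0, 1.0 - 0.02 * abs(temp - 25.0))  # kg/ha
    return hi * biomass

yields = []
for _ in range(5000):
    hi = random.triangular(0.40, 0.55, 0.48)        # low, high, mode
    rad, temp = sample_climate()
    yields.append(grain_yield(rad, temp, hi))

mean_yield = sum(yields) / len(yields)
```

The histogram of `yields` plays the role of the modeled productivity distribution; its mode would be the analogue of the most-likely-productivity figure quoted in the abstract.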
Abstract:
This article proposes a pricing model for commodities used to produce biofuel. The model is based on the concept that the deterministic component (drift) of the Wiener process is not constant but depends on time and on exogenous variables. The model, which incorporates the theory of storage, the convenience yield and the seasonality of harvests, was applied to the Brazilian sugar market. After predictions were made with the Kalman filter, the model produced results that were statistically more accurate than those returned by the two-factor model available in the literature.
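The central idea — a drift that varies with time rather than staying constant — can be sketched loosely as a discretized mean-reverting log-price pulled toward a seasonal (harvest-cycle) level. The parameters, the sinusoidal seasonality and the one-factor form are all assumptions; the paper's model and its Kalman-filter estimation are richer than this:

```python
import math
import random

random.seed(1)

def simulate_log_price(n_days=730, x0=0.0, kappa=0.02, sigma=0.01, amp=0.15):
    """Euler scheme: log-price x is pulled toward a seasonal level with
    reversion speed kappa, plus Gaussian noise of scale sigma."""
    xs = [x0]
    for t in range(1, n_days):
        seasonal = amp * math.sin(2 * math.pi * t / 365.0)   # yearly cycle
        xs.append(xs[-1] + kappa * (seasonal - xs[-1])
                  + sigma * random.gauss(0, 1))
    return xs

log_prices = simulate_log_price()
```

In a state-space formulation the unobserved seasonal level would be the latent state, which is where a Kalman filter enters for estimation and prediction.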
Abstract:
We prove that asymptotically (as n → ∞) almost all graphs with n vertices and C(d) n^{2-1/(2d)} log^{1/d} n edges are universal with respect to the family of all graphs with maximum degree bounded by d. Moreover, we provide an efficient deterministic embedding algorithm for finding copies of bounded-degree graphs in graphs satisfying certain pseudorandom properties. We also prove a counterpart result for random bipartite graphs, where the threshold number of edges is even smaller but the embedding is randomized.