30 results for Armington Assumption
Abstract:
This article discusses issues related to the organization and reception of information in the context of services and public information systems driven by technology. It stems from the assumption that in a "technologized" society, the distance between users and information is almost always of a cognitive and socio-cultural nature, a product of our effort to design communication. In this context, we favor the approach of the information sign, seeking to answer how a documentary message turns into information, i.e., a structure recognized as socially useful. Observing the structural, cognitive and communicative aspects of the documentary message, based on Documentary Linguistics, Terminology and Textual Linguistics, we analyze the knowledge management and innovation policy of the Government of the State of São Paulo, which authorizes the use of Web 2.0, and question to what extent this initiative represents innovation in the library environment.
Abstract:
The roots of swarm intelligence are deeply embedded in the biological study of self-organized behaviors in social insects. Particle swarm optimization (PSO) is one of the modern metaheuristics of swarm intelligence, and can be effectively used to solve nonlinear and non-continuous optimization problems. The basic principle of the PSO algorithm is that potential solutions (particles) fly through hyperspace, accelerating towards better solutions. Each particle adjusts its flight according to the experience of both itself and its companions, using position and velocity update equations. During the process, the coordinates in hyperspace associated with each particle's previous best fitness and the overall best value attained so far by any particle in the group are tracked and recorded in memory. In recent years, PSO approaches have been successfully applied to problem domains with multiple objectives. In this paper, a multiobjective PSO approach, based on the concepts of Pareto optimality and dominance, external archiving of elite particles and a truncated Cauchy distribution, is proposed and applied to the constrained design of a brushless DC (direct current) wheel motor. Promising results in terms of convergence and spacing performance metrics indicate that the proposed multiobjective PSO scheme is capable of producing good solutions.
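The position and velocity updates mentioned above follow the canonical PSO form; below is a minimal single-objective sketch in Python. The inertia and acceleration parameters (w, c1, c2), the bounds, and the sphere test function are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Canonical single-objective particle swarm optimization (illustrative)."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)]                      # best position found so far
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (own memory) + social pull (group memory)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

best_x, best_f = pso(lambda z: float(np.sum(z**2)))    # sphere test function
```

The multiobjective variant described in the abstract replaces the single global best with leaders drawn from an external Pareto archive, but the update equations keep this same structure.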
Abstract:
This work examines the extraction of mechanical properties from instrumented indentation P-h_s curves via extensive three-dimensional finite element analyses for pyramidal tips in a wide range of solids, under frictional and frictionless contact conditions. Since the topography of the imprint changes with the level of pile-up or sink-in, a relationship is identified between the correction factor beta in the elastic equation for the unloading indentation stage and the amount of surface deformation. It is shown that presuming a constant beta significantly affects mechanical property extractions. Consequently, a new best-fit function is found for the correlation between the penetration depth ratios h_e/h_max, h_r/h_max and n, circumventing the need to assume a constant value for beta, as was done in our prior investigation [Acta Mater. 53 (2005) pp. 3545-3561]. Simulations under frictional contact conditions provide sensible bounds for the influence of friction on both h_e/h_max and h_r/h_max. Friction is found essentially to induce an overestimation in the inferred n. Instrumented indentation experiments are also performed on three archetypal metallic materials exhibiting distinctly different contact responses. Mechanical property extractions are finally demonstrated for each of these materials.
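For context, the correction factor beta enters the standard elastic unloading relation of instrumented indentation (the textbook Oliver-Pharr form, not the paper's new best-fit function):

```latex
S \;=\; \left.\frac{dP}{dh}\right|_{h=h_{\max}} \;=\; \beta\,\frac{2}{\sqrt{\pi}}\,E_r\,\sqrt{A_c},
\qquad
\frac{1}{E_r} \;=\; \frac{1-\nu^2}{E} + \frac{1-\nu_i^2}{E_i},
```

where S is the unloading contact stiffness, A_c the projected contact area, and E_r the reduced modulus combining specimen (E, nu) and indenter (E_i, nu_i) properties. Because pile-up and sink-in change A_c, a single constant beta cannot hold across all contact responses, which is the issue the abstract addresses.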
Abstract:
In the MPC literature, stability is usually assured under the assumption that the state is measured. Since the closed-loop system may be nonlinear because of the constraints, it is not possible to apply the separation principle to prove global stability for the output feedback case. It is well known that a nonlinear closed-loop system with the state estimated via an exponentially converging observer combined with a state feedback controller can be unstable even when the controller is stable. One alternative to overcome the state estimation problem is to adopt a non-minimal state space model, in which the states are represented by measured past inputs and outputs [P.C. Young, M.A. Behzadi, C.L. Wang, A. Chotai, Direct digital and adaptive control by input-output, state variable feedback pole assignment, International Journal of Control 46 (1987) 1867-1881; C. Wang, P.C. Young, Direct digital control by input-output, state variable feedback: theoretical background, International Journal of Control 47 (1988) 97-109]. In this case, no observer is needed, since the state variables can be directly measured. However, an important disadvantage of this approach is that the realigned model is not of minimal order, which makes the infinite horizon approach to obtaining nominal stability difficult to apply. Here, we propose a method to properly formulate an infinite horizon MPC based on the output-realigned model, which avoids the use of an observer and guarantees closed-loop stability. The simulation results show that, besides providing closed-loop stability for systems with integrating and stable modes, the proposed controller may perform better than MPC controllers that use an observer to estimate the current states.
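In a realigned (non-minimal) model, the state vector is assembled directly from measured past outputs and inputs, so no observer is required. A generic sketch for a system with output y and input u (the orders n and m are illustrative, not the paper's):

```latex
x_k \;=\;
\begin{bmatrix}
y_k^{\mathsf{T}} & y_{k-1}^{\mathsf{T}} & \cdots & y_{k-n+1}^{\mathsf{T}} &
u_{k-1}^{\mathsf{T}} & \cdots & u_{k-m}^{\mathsf{T}}
\end{bmatrix}^{\mathsf{T}}
```

Every entry of x_k is a signal that has already been measured, which is what removes the estimation step; the price, as the abstract notes, is a non-minimal realization.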
Abstract:
The classical approach to acoustic imaging consists of beamforming, which produces the source distribution of interest convolved with the array point spread function. This convolution smears the image of interest, significantly reducing its effective resolution. Deconvolution methods have been proposed to enhance acoustic images and have produced significant improvements. Other proposals involve covariance fitting techniques, which avoid deconvolution altogether. However, in their traditional presentation, these enhanced reconstruction methods have very high computational costs, mostly because they have no means of efficiently transforming back and forth between a hypothetical image and the measured data. In this paper, we propose the Kronecker Array Transform (KAT), a fast separable transform for array imaging applications. Under the assumption of a separable array, it enables the acceleration of imaging techniques by several orders of magnitude with respect to the fastest previously available methods, and enables the use of state-of-the-art regularized least-squares solvers. Using the KAT, one can reconstruct images with higher resolutions than was previously possible and use more accurate reconstruction techniques, opening new and exciting possibilities for acoustic imaging.
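The speed-up available from a separable array comes from the standard Kronecker identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which replaces one product with a huge matrix by two products with small factors. A minimal numerical check (the dimensions are arbitrary illustrations, not the paper's array geometry):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=(40, 30)), rng.normal(size=(50, 20))
X = rng.normal(size=(20, 30))                      # hypothetical image

slow = np.kron(A, B) @ X.ravel(order="F")          # explicit Kronecker product: large
fast = (B @ X @ A.T).ravel(order="F")              # two small products: orders faster

assert np.allclose(slow, fast)
```

The column-major (`order="F"`) raveling matches the vec(.) convention of the identity; this factorized form is what lets iterative least-squares solvers apply the forward and adjoint operators cheaply at every iteration.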
Abstract:
In this paper we obtain the linear minimum mean square estimator (LMMSE) for discrete-time linear systems subject to state and measurement multiplicative noises and Markov jumps in the parameters. It is assumed that the Markov chain is not available. Using geometric arguments, we obtain a Kalman-type filter that is conveniently implementable in recursive form. The stationary case is also studied, and we prove convergence of the error covariance matrix of the LMMSE to a stationary value under the assumptions of mean square stability of the system and ergodicity of the associated Markov chain. It is shown that there exists a unique positive semi-definite solution to the stationary Riccati-like filter equation and, moreover, that this solution is the limit of the error covariance matrix of the LMMSE. The advantage of this scheme is that it is very easy to implement and all calculations can be performed offline.
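For orientation, the Riccati-like recursion referred to above has the familiar Kalman structure; in the standard special case with no jumps and no multiplicative noise it reduces to the textbook covariance update (A, C, Q, R being the usual system, output, and noise matrices):

```latex
P_{k+1} \;=\; A P_k A^{\mathsf{T}} + Q
\;-\; A P_k C^{\mathsf{T}}\left(C P_k C^{\mathsf{T}} + R\right)^{-1} C P_k A^{\mathsf{T}}
```

The paper's version augments this recursion with terms generated by the multiplicative noises and the Markov chain's transition probabilities; those terms are not reproduced here.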
Abstract:
This work presents a method for predicting resource availability in opportunistic grids by means of use pattern analysis (UPA), a technique based on unsupervised learning methods. The prediction method is based on the assumption that there exist several classes of computational resource use patterns, which can be used to predict resource availability. Trace-driven simulations validate this basic assumption and also provide the parameter settings for accurate learning of resource use patterns. Experiments with an implementation of the UPA method show the feasibility of its use in the scheduling of grid tasks with very little overhead. The experiments also demonstrate the method's superiority over other predictive and non-predictive methods. An adaptive prediction method is suggested to deal with the lack of training data at initialization. Further adaptive behaviour is motivated by experiments which show that, in some environments, reliable resource use patterns may not always be detected.
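One way to realize this idea is to cluster historical availability traces and forecast from the matched cluster. The sketch below assumes hourly availability fractions and k-means clustering; the feature layout, cluster count, and function names are hypothetical, since the abstract does not specify them.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical traces: rows = machine-days, columns = 24 hourly
# availability fractions in [0, 1] (feature choice is an assumption).
rng = np.random.default_rng(0)
traces = rng.random((500, 24))

model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(traces)

def predict_availability(partial_day: np.ndarray) -> np.ndarray:
    """Match the hours observed so far to the nearest learned pattern
    and return that pattern's remaining hours as the forecast."""
    h = len(partial_day)
    centers = model.cluster_centers_
    nearest = np.argmin(((centers[:, :h] - partial_day) ** 2).sum(axis=1))
    return centers[nearest, h:]

forecast = predict_availability(traces[0, :8])  # forecast hours 8..23
```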
Abstract:
The zero-inflated negative binomial model is used to account for overdispersion detected in data that are initially analyzed under the zero-inflated Poisson model. A frequentist analysis, a jackknife estimator and a non-parametric bootstrap for parameter estimation of zero-inflated negative binomial regression models are considered. In addition, an EM-type algorithm is developed for performing maximum likelihood estimation. The appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes are then derived, along with some ways to perform global influence analysis. In order to study departures from the error assumption, as well as the presence of outliers, residual analysis based on the standardized Pearson residuals is discussed. The relevance of the approach is illustrated with a real data set, where it is shown that zero-inflated negative binomial regression models seem to fit the data better than the Poisson counterpart.
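For reference, in one common mean-dispersion parameterization the zero-inflated negative binomial model mixes a point mass at zero (probability pi) with a negative binomial count (mean mu, dispersion phi):

```latex
P(Y=0) \;=\; \pi + (1-\pi)\left(\frac{\phi}{\phi+\mu}\right)^{\!\phi},
\qquad
P(Y=y) \;=\; (1-\pi)\,\frac{\Gamma(y+\phi)}{\Gamma(\phi)\,y!}
\left(\frac{\phi}{\phi+\mu}\right)^{\!\phi}
\left(\frac{\mu}{\phi+\mu}\right)^{\!y}, \quad y \ge 1.
```

Overdispersion enters through phi: as phi grows the negative binomial component collapses to a Poisson, recovering the zero-inflated Poisson model under which the data were initially analyzed.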
Abstract:
Background: Meta-analysis is increasingly being employed as a screening procedure in large-scale association studies to select promising variants for follow-up studies. However, standard methods for meta-analysis require the assumption of an underlying genetic model, which is typically unknown a priori. This drawback can introduce model misspecification, causing power to be suboptimal, or require the evaluation of multiple genetic models, which increases the number of false-positive associations, ultimately leading to resources wasted on fruitless replication studies. We used simulated meta-analyses of large genetic association studies to investigate naive strategies of genetic model specification to optimize screenings of genome-wide meta-analysis signals for further replication. Methods: Different methods, meta-analytical models and strategies were compared in terms of power and type-I error. Simulations were carried out for a binary trait in a wide range of true genetic models, genome-wide thresholds, minor allele frequencies (MAFs), odds ratios and between-study heterogeneity (tau^2). Results: Among the investigated strategies, a simple Bonferroni-corrected approach that fits both multiplicative and recessive models was found to be optimal in most examined scenarios, reducing the likelihood of false discoveries and enhancing power in scenarios with small MAFs, both in the presence and in the absence of heterogeneity. Nonetheless, this strategy is sensitive to tau^2 whenever the susceptibility allele is common (MAF ≥ 30%), resulting in an increased number of false-positive associations compared with an analysis that considers only the multiplicative model. Conclusion: Invoking a simple Bonferroni adjustment and testing for both multiplicative and recessive models is fast and an optimal strategy in large meta-analysis-based screenings. However, care must be taken when the examined variants are common, where specification of a multiplicative model alone may be preferable.
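Concretely, the recommended screening rule tests each variant under both genetic models and halves the significance threshold to account for the two tests; under a genome-wide threshold alpha_GW, a variant is flagged for replication when

```latex
\min\left(p_{\text{multiplicative}},\; p_{\text{recessive}}\right) \;<\; \frac{\alpha_{GW}}{2}.
```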
Abstract:
The metallic voice is usually confused with ring or nasality by singers and untrained listeners, who are not used to perceptual vocal analysis. They believe a metallic voice results from a rise in fundamental frequency. A diagnostic error in this respect may lead to lowering pitch, an incorrect procedure that could cause vocal overload and fatigue. The purpose of this article is to study the quality of the metallic voice considering the correlation between information from the physiological and acoustic planes, based on a perceptive consensual assumption. Fiberscopic video pharyngolaryngoscopy was performed on 21 professional singers while speaking the vowel [e] in normal and metallic modes, to observe muscular movements and structural changes of the velopharynx, pharynx, and larynx. Vocal samples captured simultaneously with the fiberscopic examination were acoustically analyzed. The frequencies and amplitudes of the first four formants (F1, F2, F3, and F4) were extracted by means of the linear prediction coefficient (LPC) spectrum and statistically analyzed. Vocal tract adjustments such as velar lowering, pharyngeal wall narrowing, laryngeal rise, and aryepiglottic and lateral laryngeal constrictions were frequently found. There were no significant changes in the frequency and amplitude of F1 in the metallic voice; there were significant increases in the amplitudes of F2, F3, and F4 and in formant frequency. A metallic voice perceived as louder was correlated with an increase in the amplitudes of F3 and F4. Physiological adjustments of the velopharynx, pharynx, and larynx combine to characterize the metallic voice and can be acoustically related to changes in the formant pattern.
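Formant extraction from an LPC spectrum, as used above, amounts to fitting an all-pole model to a voiced frame and converting pole angles to frequencies. A generic sketch of that procedure, not the authors' exact settings; the frame length, model order, and function name are illustrative:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_formants(frame: np.ndarray, sr: int, order: int = 12) -> np.ndarray:
    """Estimate formant frequencies (Hz) from one voiced frame via LPC."""
    frame = frame * np.hamming(len(frame))           # taper frame edges
    r = np.correlate(frame, frame, "full")[len(frame) - 1:][: order + 1]
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])      # autocorrelation (Yule-Walker) method
    roots = np.roots(np.r_[1.0, -a])                 # poles of the all-pole model
    roots = roots[np.imag(roots) > 0]                # one root per conjugate pair
    return np.sort(np.angle(roots) * sr / (2 * np.pi))
    # lowest values approximate F1, F2, ...; bandwidth filtering omitted for brevity
```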
Abstract:
In this paper we study the possible microscopic origin of heavy-tailed probability density distributions for the price variation of financial instruments. We extend the standard log-normal process to include another random component, in the so-called stochastic volatility models. We study these models under an assumption, akin to the Born-Oppenheimer approximation, in which the volatility has already relaxed to its equilibrium distribution and acts as a background to the evolution of the price process. In this approximation, we show that all models of stochastic volatility should exhibit a scaling relation in the time lag of zero-drift modified log-returns. We verify that the Dow Jones Industrial Average index indeed follows this scaling. We then focus on two popular stochastic volatility models, the Heston and Hull-White models. In particular, we show that in the Hull-White model the resulting probability distribution of log-returns in this approximation corresponds to the Tsallis (Student-t) distribution, with the Tsallis parameters given in terms of the microscopic stochastic volatility model. Finally, we show that the log-returns for 30 years of Dow Jones index data are well fitted by a Tsallis distribution, and we obtain the relevant parameters.
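In their standard textbook forms (not necessarily the paper's notation), the two models mentioned differ in the process assumed for the variance v_t of the price S_t:

```latex
\text{Heston:}\quad
dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{(1)}, \qquad
dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW_t^{(2)};
\\[4pt]
\text{Hull--White:}\quad
dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{(1)}, \qquad
dv_t = \phi\,v_t\,dt + \sigma_v\,v_t\,dW_t^{(2)}.
```

Heston's variance mean-reverts, while Hull-White's follows a geometric Brownian motion with a log-normal equilibrium distribution; averaging the price process over that distribution is what yields the Student-t-like (Tsallis) tails described above.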
Abstract:
Hantaviruses are rodent-borne bunyaviruses that infect the Arvicolinae, Murinae, and Sigmodontinae subfamilies of Muridae. The rate of molecular evolution in the hantaviruses has previously been estimated at approximately 10^-7 nucleotide substitutions per site, per year (substitutions/site/year), based on the assumption of codivergence, and hence shared divergence times, with their rodent hosts. If substantiated, this would make the hantaviruses among the slowest evolving of all RNA viruses. However, as hantaviruses replicate with an RNA-dependent RNA polymerase with error rates in the region of one mutation per genome replication, this low rate of nucleotide substitution is anomalous. Here, we use a Bayesian coalescent approach to estimate the rate of nucleotide substitution from serially sampled gene sequence data for hantaviruses known to infect each of the 3 rodent subfamilies: Araraquara virus (Sigmodontinae), Dobrava virus (Murinae), Puumala virus (Arvicolinae), and Tula virus (Arvicolinae). Our results reveal that hantaviruses exhibit short-term substitution rates of 10^-2 to 10^-4 substitutions/site/year and so are within the range exhibited by other RNA viruses. The disparity between this substitution rate and that estimated assuming rodent-hantavirus codivergence suggests that the codivergence hypothesis may need to be reevaluated.
Abstract:
Functional MRI (fMRI) data often have a low signal-to-noise ratio (SNR) and are contaminated by strong interference from other physiological sources. A promising tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). BSS is based on the assumption that the detected signals are a mixture of a number of independent source signals that are linearly combined via an unknown mixing matrix; it seeks to determine the mixing matrix and recover the source signals based on principles of statistical independence. In most cases, extraction of all sources is unnecessary; instead, a priori information can be applied to extract only the signal of interest. Here we propose an algorithm based on a variation of ICA, called Dependent Component Analysis (DCA), in which the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We applied this method to fMRI data, aiming to find the hemodynamic response that follows neuronal activation from auditory stimulation in human subjects. The method localized a significant signal modulation in cortical regions corresponding to the primary auditory cortex. The results obtained by DCA were also compared to those of the general linear model (GLM), the most widely used method for analyzing fMRI datasets.
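The BSS assumption stated above is usually written as an instantaneous linear mixing model: the observed signals x(t) are an unknown linear combination of independent sources s(t), and separation estimates an unmixing matrix W:

```latex
x(t) \;=\; A\,s(t), \qquad \hat{s}(t) \;=\; W\,x(t),
```

where A is the unknown mixing matrix and W is chosen so that the components of the recovered signals are as statistically independent as possible; in DCA, this criterion is augmented by the time-delay prior mentioned in the abstract.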
Abstract:
Background and Purpose: Functional MRI is a powerful tool for investigating the recovery of brain function in patients with stroke. An inherent assumption in functional MRI data analysis is that the blood oxygenation level-dependent (BOLD) signal is stable over the course of the examination. In this study, we evaluated the validity of this assumption in patients with chronic stroke. Methods: Fifteen patients performed a simple motor task with repeated epochs using the paretic and the unaffected hand in separate runs. The corresponding BOLD signal time courses were extracted from the primary and supplementary motor areas of both hemispheres. Statistical maps were obtained by the conventional General Linear Model and by a parametric General Linear Model. Results: Stable BOLD amplitude was observed when the task was executed with the unaffected hand. Conversely, the BOLD signal amplitude in both primary and supplementary motor areas was progressively attenuated in every patient when the task was executed with the paretic hand. The conventional General Linear Model analysis failed to detect brain activation during movement of the paretic hand. However, the proposed parametric General Linear Model corrected the misdetection problem and showed robust activation in both primary and supplementary motor areas. Conclusions: The use of data analysis tools built on the premise of a stable BOLD signal may lead to misdetection of functional regions and underestimation of brain activity in patients with stroke. The present data urge caution when relying on the BOLD response as a marker of brain reorganization in patients with stroke. (Stroke. 2010;41:1921-1926.)
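One common way to accommodate a progressively attenuating response within the GLM, consistent with the parametric model described above though not necessarily its exact form, is to add a time-modulated copy of the task regressor:

```latex
y(t) \;=\; \beta_0 + \beta_1\, x(t) + \beta_2\,\big[x(t)\cdot t\big] + \varepsilon(t),
```

where x(t) is the convolved task regressor; a negative beta_2 captures a run-long amplitude decline that the conventional model (with beta_2 fixed at zero) averages away, which is how the misdetection described above arises.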
Abstract:
The magnitude of the basic reproduction ratio R_0 of an epidemic can be estimated in several ways, namely, from the final size of the epidemic, from the average age at first infection, or from the initial growth phase of the outbreak. In this paper, we discuss this last method of estimating R_0 for vector-borne infections. Implicit in these models is the assumption that there is an exponential phase of the outbreak, which implies that in all cases R_0 > 1. We demonstrate that an outbreak is possible even in cases where R_0 is less than one, provided that the vector-to-human component of R_0 is greater than one and that a certain number of infected vectors are introduced into the affected population. This theory is applied to two real epidemiological dengue situations in southeastern Brazil, one where R_0 is less than one, and another where R_0 is greater than one. In both cases, the model mirrors the real situations with reasonable accuracy.
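For vector-borne infections, R_0 factorizes into two one-way transmission components, which is what makes the situation described above possible (the notation here is generic, not the paper's):

```latex
R_0 \;=\; R_{hv}\, R_{vh},
```

where R_{hv} is the expected number of vectors infected by one infectious human and R_{vh} the expected number of humans infected by one infectious vector. Even when the product is below one, an introduction of enough infected vectors with R_{vh} > 1 can seed an appreciable outbreak among humans before the transmission chain dies out.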