902 results for Dynamic search fireworks algorithm with covariance mutation


Relevance:

100.00%

Publisher:

Abstract:

This thesis is a collection of three papers in macroeconomics and public finance. It develops Dynamic Stochastic General Equilibrium models to analyze the macroeconomic implications of corporate tax policies in the presence of imperfect financial markets. The first chapter analyzes the mechanisms through which the effects of a rescheduling of the corporate profit tax are transmitted to the economy. In an economy consisting of a government, a representative firm, and a representative household, I derive a Ricardian equivalence theorem for the corporate profit tax. Specifically, I establish that if financial markets are perfect, a rescheduling of the corporate profit tax that does not change the present value of the firm's total lifetime tax liability has no real effect on the economy when the government uses a lump-sum tax. Then, in the presence of imperfect financial markets, I show that a temporary cut in the lump-sum corporate profit tax stimulates investment because it temporarily reduces the marginal cost of investment. Finally, my results indicate that if the tax is proportional to corporate profit, the anticipation of higher future taxes lowers the expected return on investment and dampens the investment stimulus generated by the tax cut. The second chapter is co-authored with Rui Castro. In this paper, we quantify the effects of a temporary cut in the corporate profit tax on firms' individual investment and production decisions, as well as on macroeconomic aggregates, in the presence of imperfect financial markets. In a model where firms are subject to idiosyncratic productivity shocks, we first establish that credit rationing affects small (young) firms more than large firms. Among firms of the same size, the most productive firms suffer most from the lack of liquidity resulting from financial market imperfections. We then show that for a 1-dollar drop in tax revenue, investment and output rise by 26 and 3.5 cents, respectively. The cumulative effect is an increase in aggregate investment and output of 4.6 and 7.2 cents, respectively. At the individual level, our results indicate that the policy stimulates the investment of small, initially liquidity-constrained firms, while it reduces the investment of large, initially unconstrained firms. The third chapter analyzes the effects of the corporate income tax reform proposed by the U.S. Treasury in 1992. The proposal recommends eliminating taxes on dividends and capital gains and levying a single tax on corporate income. To this end, I use a dynamic stochastic general equilibrium model with imperfect financial markets in which firms are subject to idiosyncratic productivity shocks. The results indicate that abolishing taxes on dividends and capital gains reduces distortions in firms' investment choices, stimulates investment, and leads to a better allocation of capital. However, to be fiscally sustainable, the reform requires raising the corporate profit tax rate from 34% to 42%. This higher tax rate discourages capital accumulation. Overall, the reform reduces capital accumulation and output by 8% and 1%, respectively. Nevertheless, it improves the allocation of capital by 20%, yielding productivity gains of 1.41% and a modest increase in consumer welfare.
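To fix ideas, the first chapter's equivalence result rests on a present-value neutrality condition. In standard notation (ours, not necessarily the thesis's), with profit-tax payments T_t and one-period interest rates r_s, a reschedule \{T_t\} \to \{T'_t\} is neutral when

\[
\sum_{t=0}^{\infty} \frac{T_t}{\prod_{s=1}^{t}(1+r_s)}
= \sum_{t=0}^{\infty} \frac{T'_t}{\prod_{s=1}^{t}(1+r_s)},
\]

because under perfect financial markets and lump-sum taxation the firm's value depends on the tax stream only through this sum, so investment and all other real allocations are unchanged.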

Relevance:

100.00%

Publisher:

Abstract:

We present an efficient and robust method for the calculation of all S matrix elements (elastic, inelastic, and reactive) over an arbitrary energy range from a single real-symmetric Lanczos recursion. Our new method transforms the fundamental equations associated with Light's artificial boundary inhomogeneity approach [J. Chem. Phys. 102, 3262 (1995)] from the primary representation (original grid or basis representation of the Hamiltonian or its function) into a single tridiagonal Lanczos representation, thereby affording an iterative version of the original algorithm with greatly superior scaling properties. The method has important advantages over existing iterative quantum dynamical scattering methods: (a) the numerically intensive matrix propagation proceeds with real symmetric algebra, which is inherently more stable than its complex symmetric counterpart; (b) no complex absorbing potential or real damping operator is required, saving much of the exterior grid space which is commonly needed to support these operators and also removing the associated parameter dependence. Test calculations are presented for the collinear H+H2 reaction, revealing excellent performance characteristics. (C) 2004 American Institute of Physics.
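The computational primitive the abstract builds on is a single real-symmetric Lanczos recursion, which tridiagonalizes a symmetric operator in its Krylov space. A minimal NumPy sketch of that primitive follows; a random symmetric test matrix stands in for the Hamiltonian representation, and nothing here reproduces the paper's scattering formulation:

import numpy as np

def lanczos(H, v0, m):
    """Run m Lanczos steps on real symmetric H, returning the diagonal
    (alpha) and off-diagonal (beta) of the tridiagonal representation."""
    n = H.shape[0]
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v_prev = np.zeros(n)
    v = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = H @ v
        alpha[j] = v @ w
        w -= alpha[j] * v
        if j > 0:
            w -= beta[j - 1] * v_prev
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            v_prev, v = v, w / beta[j]
    return alpha, beta

# Example: extreme eigenvalues of the tridiagonal matrix approximate
# those of H (the same stability benefit of real symmetric algebra
# the abstract points to).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
H = (A + A.T) / 2                      # real symmetric test matrix
alpha, beta = lanczos(H, rng.standard_normal(200), 50)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
print(np.linalg.eigvalsh(T)[-3:])      # Ritz values ...
print(np.linalg.eigvalsh(H)[-3:])      # ... vs. true extreme eigenvalues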

Relevance:

100.00%

Publisher:

Abstract:

The polypeptide backbones and side chains of proteins are constantly moving due to thermal motion and the kinetic energy of the atoms. The B-factors of protein crystal structures reflect the fluctuation of atoms about their average positions and provide important information about protein dynamics. Computational approaches to predict thermal motion are useful for analyzing the dynamic properties of proteins with unknown structures. In this article, we utilize a novel support vector regression (SVR) approach to predict the B-factor distribution (B-factor profile) of a protein from its sequence. We explore schemes for encoding sequences and various settings for the parameters used in SVR. Based on a large dataset of high-resolution proteins, our method predicts the B-factor distribution with a Pearson correlation coefficient (CC) of 0.53. In addition, our method predicts the B-factor profile with a CC of at least 0.56 for more than half of the proteins. Our method also performs well for classifying residues (rigid vs. flexible). For almost all predicted B-factor thresholds, prediction accuracies (percent of correctly predicted residues) are greater than 70%. These results exceed the best results of other sequence-based prediction methods. (C) 2005 Wiley-Liss, Inc.
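As a rough sketch of the shape of such a pipeline (sequence windows encoded as features, SVR regressed onto B-factors), here is a minimal scikit-learn version. The window width, one-hot encoding, and toy targets are our assumptions, not the paper's tuned scheme:

import numpy as np
from sklearn.svm import SVR

AA = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AA)}

def encode_windows(seq, w=7):
    """One-hot encode each residue together with w//2 neighbours of context."""
    half = w // 2
    padded = "X" * half + seq + "X" * half   # 'X' pads the termini
    feats = []
    for i in range(len(seq)):
        window = padded[i:i + w]
        vec = np.zeros(w * len(AA))
        for j, a in enumerate(window):
            if a in AA_INDEX:
                vec[j * len(AA) + AA_INDEX[a]] = 1.0
        feats.append(vec)
    return np.array(feats)

# Toy example: one sequence with made-up B-factors as targets.
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
b_factors = np.random.default_rng(1).normal(20, 5, len(seq))  # placeholder
X = encode_windows(seq)
model = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X, b_factors)
pred = model.predict(X)
# Pearson CC, the metric used in the abstract (in-sample here, for
# illustration only; the paper evaluates on a large held-out dataset).
print(np.corrcoef(pred, b_factors)[0, 1])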

Relevance:

100.00%

Publisher:

Abstract:

Extraction and reconstruction of rectal wall structures from ultrasound images is helpful to surgeons in rectal clinical diagnosis and in the 3-D reconstruction of rectal structures. The primary task is to extract the boundary of the muscular layers on the rectal wall. However, due to the low SNR of ultrasound imaging and the thin muscular layer structure of the rectum, this boundary detection task remains a challenge. An active contour model is an effective high-level model, which has been used successfully to aid the tasks of object representation and recognition in many image-processing applications. We present a novel multigradient field active contour algorithm with an extended ability for multiple-object detection, which overcomes some limitations of ordinary active contour models ("snakes"). The core of the algorithm is the proposed multigradient vector fields, which replace the image forces in the kinetic function to provide alternative constraints on the deformation of the active contour, thereby partially solving the initialization limitation of active contours for rectal wall boundary detection. An adaptive expanding force is also added to the model to help the active contour pass through homogeneous regions in the image. The efficacy of the model is explained and tested on the boundary detection of a ring-shaped image, a synthetic image, and an ultrasound image. The experimental results show that the proposed multigradient field active contour is feasible for multilayer boundary detection of the rectal wall.
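A minimal snake iteration in the spirit described might look like the following: internal elasticity/rigidity forces, an external force field derived from image gradients, and an outward "balloon" term standing in for the paper's adaptive expanding force. The synthetic disk image and all parameter values are illustrative assumptions, not the paper's multigradient formulation:

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def snake_step(pts, fx, fy, alpha=0.1, beta=0.05, gamma=1.0, balloon=0.2):
    """One explicit update of a closed snake.

    pts : (N, 2) contour points as (row, col); fx, fy : external force
    fields sampled at integer pixels; balloon : outward force along the
    normals (a crude stand-in for an adaptive expanding force)."""
    d2 = np.roll(pts, -1, 0) - 2 * pts + np.roll(pts, 1, 0)          # elasticity
    d4 = (np.roll(pts, -2, 0) - 4 * np.roll(pts, -1, 0) + 6 * pts
          - 4 * np.roll(pts, 1, 0) + np.roll(pts, 2, 0))              # rigidity
    tang = np.roll(pts, -1, 0) - np.roll(pts, 1, 0)
    normals = np.stack([-tang[:, 1], tang[:, 0]], 1)                  # outward
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-9
    ij = np.clip(pts.astype(int), 0, np.array(fx.shape) - 1)
    ext = np.stack([fy[ij[:, 0], ij[:, 1]], fx[ij[:, 0], ij[:, 1]]], 1)
    return pts + gamma * (alpha * d2 - beta * d4 + ext + balloon * normals)

# Synthetic image: a bright disk of radius 40; the snake starts well
# inside it and should settle on the boundary.
img = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2] = 1.0
edge = gaussian_filter(img, 2)
grad_mag = sobel(edge, 0) ** 2 + sobel(edge, 1) ** 2
fx = sobel(grad_mag, 1)   # external force = gradient of edge strength
fy = sobel(grad_mag, 0)
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.stack([64 + 10 * np.sin(theta), 64 + 10 * np.cos(theta)], 1)
for _ in range(300):
    pts = snake_step(pts, fx, fy)
print(np.mean(np.hypot(pts[:, 0] - 64, pts[:, 1] - 64)))  # ~40 if locked on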

Relevance:

100.00%

Publisher:

Abstract:

Human perception is finely tuned to extract structure about the 4D world of time and space as well as properties such as color and texture. Developing intuitions about spatial structure beyond 4D requires exploiting other perceptual and cognitive abilities. One of the most natural ways to explore complex spaces is for a user to actively navigate through them, using local explorations and global summaries to develop intuitions about structure, and then testing the developing ideas by further exploration. This article provides a brief overview of a technique for visualizing surfaces defined over moderate-dimensional binary spaces, by recursively unfolding them onto a 2D hypergraph. We briefly summarize the uses of a freely available Web-based visualization tool, Hyperspace Graph Paper (HSGP), for exploring fitness landscapes and search algorithms in evolutionary computation. HSGP provides a way for a user to actively explore a landscape, from simple tasks such as mapping the neighborhood structure of different points, to seeing global properties such as the size and distribution of basins of attraction or how different search algorithms interact with landscape structure. It has been most useful for exploring recursive and repetitive landscapes, and its strength is that it allows intuitions to be developed through active navigation by the user, and exploits the visual system's ability to detect pattern and texture. The technique is most effective when applied to continuous functions over Boolean variables using 4 to 16 dimensions.
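To make the unfolding idea concrete, here is a small sketch that lays a function over {0,1}^n onto a 2D grid Karnaugh-map style: the bits are split between the two axes and each axis is ordered by Gray code, so adjacent cells differ in exactly one input bit. HSGP's recursive layout is similar in spirit, though not necessarily identical:

import numpy as np

def gray(i):
    """i-th Gray code, so consecutive indices differ in exactly one bit."""
    return i ^ (i >> 1)

def unfold(f, n):
    """Render f: {0,1}^n -> R as a 2D array with one-bit adjacency."""
    nx, ny = (n + 1) // 2, n // 2
    grid = np.zeros((2 ** ny, 2 ** nx))
    for r in range(2 ** ny):
        for c in range(2 ** nx):
            gx, gy = gray(c), gray(r)
            bits = 0
            for k in range(nx):          # even-position bits come from x
                bits |= ((gx >> k) & 1) << (2 * k)
            for k in range(ny):          # odd-position bits come from y
                bits |= ((gy >> k) & 1) << (2 * k + 1)
            grid[r, c] = f(bits)
    return grid

# Example: the "one-max" fitness (number of 1 bits) over 8 dimensions.
# Because neighbouring cells differ in one bit, adjacent values in the
# rendered grid always differ by exactly 1 -- the visible texture is
# what a user would navigate in a tool like HSGP.
onemax = lambda b: bin(b).count("1")
print(unfold(onemax, 8))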

Relevance:

100.00%

Publisher:

Abstract:

Summarizing topological relations is fundamental to many spatial applications including spatial query optimization. In this article, we present several novel techniques to effectively construct cell density based spatial histograms for range (window) summarizations restricted to the four most important level-two topological relations: contains, contained, overlap, and disjoint. We first present a novel framework to construct a multiscale Euler histogram in 2D space with the guarantee of exact summarization results for aligned windows in constant time. To minimize the storage space of such a multiscale Euler histogram, an approximation algorithm with approximation ratio 19/12 is presented, while the problem is shown to be NP-hard in general. To conform to a limited storage space where a multiscale histogram may be allowed to have only k Euler histograms, an effective algorithm is presented to construct multiscale histograms that achieve high accuracy in approximately summarizing aligned windows. Then, we present a new approximate algorithm to query an Euler histogram that cannot guarantee exact answers; it runs in constant time. We also investigate the problem of nonaligned windows and the problem of effectively partitioning the data space to support nonaligned window queries. Finally, we extend our techniques to 3D space. Our extensive experiments against both synthetic and real world datasets demonstrate that the approximate multiscale histogram techniques may improve the accuracy of existing techniques by several orders of magnitude while retaining cost efficiency, and that the exact multiscale histogram technique requires only storage space linearly proportional to the number of cells for many popular real datasets.
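A toy version of the underlying Euler-histogram idea (without the paper's multiscale machinery) helps clarify why aligned-window counts can be exact: counts are kept not only on the grid cells but also on the edges and vertices between them, and a window query sums them with alternating signs per Euler's formula F - E + V. Everything below is our illustrative reconstruction, not the paper's data structure:

import numpy as np

class EulerHistogram2D:
    """Minimal Euler histogram over an R x C grid of cells, stored on a
    doubled (2R-1) x (2C-1) lattice: even/even entries are cells, mixed
    parity entries are edges, odd/odd entries are vertices."""

    def __init__(self, rows, cols):
        self.h = np.zeros((2 * rows - 1, 2 * cols - 1), dtype=int)

    def insert(self, r1, c1, r2, c2):
        """Add a rectangle covering cells [r1..r2] x [c1..c2]: increment
        every face, edge, and vertex it covers."""
        self.h[2 * r1:2 * r2 + 1, 2 * c1:2 * c2 + 1] += 1

    def count_intersecting(self, r1, c1, r2, c2):
        """Exact number of inserted rectangles intersecting the aligned
        window: faces and vertices count +, edges count -, so each
        intersecting rectangle contributes Euler characteristic 1."""
        total = 0
        for x in range(2 * r1, 2 * r2 + 1):
            for y in range(2 * c1, 2 * c2 + 1):
                sign = 1 if (x % 2 == y % 2) else -1
                total += sign * self.h[x, y]
        return total

hist = EulerHistogram2D(8, 8)
hist.insert(0, 0, 2, 2)
hist.insert(1, 1, 5, 5)
hist.insert(6, 6, 7, 7)
print(hist.count_intersecting(0, 0, 3, 3))  # -> 2
print(hist.count_intersecting(4, 4, 7, 7))  # -> 2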

Relevance:

100.00%

Publisher:

Abstract:

We describe a network module detection approach which combines a rapid and robust clustering algorithm with an objective measure of the coherence of the modules identified. The approach is applied to the network of genetic regulatory interactions surrounding the tumor suppressor gene p53. This algorithm identifies ten clusters in the p53 network, which are visually coherent and biologically plausible.
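The abstract does not spell out the clustering algorithm or the coherence measure here, so as a stand-in the sketch below pairs greedy modularity maximisation with modularity as the objective coherence score; the toy graph is illustrative, not curated p53 biology:

import networkx as nx
from networkx.algorithms import community

# Toy regulatory-interaction graph standing in for the p53 neighbourhood.
G = nx.Graph()
G.add_edges_from([
    ("p53", "MDM2"), ("p53", "p21"), ("p53", "BAX"), ("p53", "GADD45"),
    ("MDM2", "MDMX"), ("p21", "CDK2"), ("p21", "CCNE1"),
    ("BAX", "BCL2"), ("BAX", "CASP9"), ("GADD45", "CDC2"),
])

# A rapid clustering pass (greedy modularity maximisation) ...
modules = community.greedy_modularity_communities(G)
# ... plus an objective coherence score for the partition found.
score = community.modularity(G, modules)

for i, m in enumerate(modules):
    print(f"module {i}: {sorted(m)}")
print(f"modularity = {score:.3f}")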

Relevance:

100.00%

Publisher:

Abstract:

Summarizing topological relations is fundamental to many spatial applications including spatial query optimization. In this paper, we present several novel techniques to effectively construct cell density based spatial histograms for range (window) summarizations restricted to the four most important topological relations: contains, contained, overlap, and disjoint. We first present a novel framework to construct a multiscale histogram composed of multiple Euler histograms with the guarantee of exact summarization results for aligned windows in constant time. Then we present an approximation algorithm, with approximation ratio 19/12, to minimize the storage space of such multiscale Euler histograms, although the problem is generally NP-hard. To conform to a limited storage space where only k Euler histograms are allowed, an effective algorithm is presented to construct multiscale histograms that achieve high accuracy. Finally, we present a new approximate algorithm to query an Euler histogram that cannot guarantee exact answers; it runs in constant time. Our extensive experiments against both synthetic and real world datasets demonstrate that the approximate multiscale histogram techniques may improve the accuracy of existing techniques by several orders of magnitude while retaining cost efficiency, and that the exact multiscale histogram technique requires only storage space linearly proportional to the number of cells for the real datasets.

Relevance:

100.00%

Publisher:

Abstract:

Growing concern about academic dishonesty and its potential impact on organizations and society demands special attention. Several studies indicate that technology, and the Internet in particular, may increase academic dishonesty, especially plagiarism. The literature lists the types of academic dishonesty as cheating, plagiarism, outside assistance, and electronic fraud. Among these, plagiarism is becoming the greatest concern for higher-education institutions (LOVETT-HOOPER et al., 2007). The individual's intentionality is a central feature of studies on plagiarism, which is characterized as the consequence of an individual decision. From the standpoint of Ajzen's (1991) Theory of Planned Behavior (TPB), individual action is guided by beliefs (behavioral, normative, and control beliefs) that shape one's attitude toward something, which in turn leads to the rationalization of the intention that will influence the individual's behavior. This research aims to identify the antecedent factors that influence attitudes toward plagiarism among Brazilian distance-learning undergraduate students. A systematic mapping of the literature on the topic identified more than 300 articles and converged on 74 considered fundamental. From these, an analysis model was derived that defines the following constructs as predictors of a Positive Attitude toward Plagiarism (given certain influences, the individual will consider committing plagiarism): Moral Positioning, Social Norms, and Situational Aspects. To test the model, a survey was conducted in which 1,800 questionnaires were sent to students in different terms of the Business Administration program at a private university. The response rate was 28.95%, totaling 353 valid questionnaires. Data were analyzed with structural equation modeling using the Partial Least Squares (PLS) algorithm, a technique suited to small samples and to cases where no distributional parameters can be assumed. The main findings were: the model explains 41.8% of the variability of the Positive Attitude toward plagiarism; and six significant constructs were identified in the model: Understanding (-0.102, p<0.05), Value Expectation (0.243, p<0.001), Ease (0.108, p<0.05), Pressure Situation (0.126, p<0.01), Relativism (0.272, p<0.001), and Severity and Possibility of Punishment (-0.255, p<0.001).

Relevance:

100.00%

Publisher:

Abstract:

We present in this paper ideas for tackling the problem of analysing and forecasting nonstationary time series within the financial domain. Accepting the stochastic nature of the underlying data generator, we assume that the evolution of the generator's parameters is restricted to a deterministic manifold. We therefore propose methods for determining the characteristics of the time-localised distribution. Starting from the assumption of a static normal distribution, we refine this hypothesis according to the empirical results obtained with these methods, and conclude by indicating a dynamic non-Gaussian behaviour with varying dependency for the time series under consideration.
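A minimal sketch of the first step described, estimating a time-localised distribution and then probing the static-Gaussian hypothesis, might look like the following; the rolling-window estimator and the synthetic volatility path are our assumptions, not the paper's methods:

import numpy as np
from scipy import stats

def local_moments(x, window=250):
    """Rolling mean/std as a first pass at the time-localised distribution."""
    mu = np.array([x[i:i + window].mean() for i in range(len(x) - window)])
    sd = np.array([x[i:i + window].std() for i in range(len(x) - window)])
    return mu, sd

# Synthetic "returns" whose volatility drifts along a smooth deterministic
# path, standing in for a generator whose parameters move on a manifold.
rng = np.random.default_rng(7)
t = np.arange(4000)
vol = 0.01 * (1.5 + np.sin(t / 500))          # deterministic parameter path
x = rng.standard_normal(4000) * vol

mu, sd = local_moments(x)
print("volatility drift recovered:", sd[:5].mean(), "->", sd[-5:].mean())

# Test the static-Gaussian hypothesis on the pooled series: mixing scales
# typically produces heavy tails, so the test tends to reject even though
# each local window is close to normal.
stat, p = stats.normaltest(x)
print(f"normality test on the full series: p = {p:.3g}")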

Relevance:

100.00%

Publisher:

Abstract:

The collect-and-place machine is one of the most widely used placement machines for assembling electronic components on printed circuit boards (PCBs). Nevertheless, very little research has addressed optimising the performance of this machine. This motivates us to study the component scheduling problem for this type of machine, with the objective of minimising the total assembly time. The component scheduling problem is an integration of the component sequencing problem, that is, the sequencing of component placements, and the feeder arrangement problem, that is, the assignment of component types to feeders. To solve the component scheduling problem efficiently, a hybrid genetic algorithm is developed in this paper. A numerical example is used to compare the performance of the algorithm with different component grouping approaches and different population sizes.
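As a sketch of how the two coupled sub-problems can share one chromosome, a placement-sequence permutation plus a feeder assignment, here is a plain genetic algorithm with order crossover and swap mutations. The cost model is a crude stand-in, and the paper's hybrid adds problem-specific grouping heuristics not reproduced here:

import random

# Toy instance: placements (x, y, component type) and one feeder slot per type.
random.seed(0)
TYPES = list("ABCDE")
PLACEMENTS = [(random.uniform(0, 100), random.uniform(0, 100),
               random.choice(TYPES)) for _ in range(30)]
SLOTS = len(TYPES)

def assembly_time(seq, feeder):
    """Proxy objective: head travel between placements plus a trip to the
    feeder slot of each component type (slot k sits at x = 20*k, y = -10).
    A stand-in cost model, not a real collect-and-place simulator."""
    t, pos = 0.0, (0.0, 0.0)
    for i in seq:
        x, y, c = PLACEMENTS[i]
        fx = 20.0 * feeder[c]
        t += abs(pos[0] - fx) + 10 + abs(fx - x) + abs(-10 - y)
        pos = (x, y)
    return t

def crossover(a, b):
    """Order crossover (OX) on the placement-sequence permutation."""
    i, j = sorted(random.sample(range(len(a)), 2))
    hole = set(a[i:j])
    rest = [g for g in b if g not in hole]
    return rest[:i] + a[i:j] + rest[i:]

def evolve(pop_size=40, gens=200):
    pop = []
    for _ in range(pop_size):
        seq = random.sample(range(len(PLACEMENTS)), len(PLACEMENTS))
        feeder = dict(zip(TYPES, random.sample(range(SLOTS), SLOTS)))
        pop.append((seq, feeder))
    for _ in range(gens):
        pop.sort(key=lambda ind: assembly_time(*ind))
        survivors = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(children) + len(survivors) < pop_size:
            (s1, f1), (s2, _) = random.sample(survivors, 2)
            seq = crossover(s1, s2)
            feeder = dict(f1)
            if random.random() < 0.3:              # mutate feeder assignment
                a, b = random.sample(TYPES, 2)
                feeder[a], feeder[b] = feeder[b], feeder[a]
            if random.random() < 0.3:              # mutate sequence (swap)
                i, j = random.sample(range(len(seq)), 2)
                seq[i], seq[j] = seq[j], seq[i]
            children.append((seq, feeder))
        pop = survivors + children
    return min(pop, key=lambda ind: assembly_time(*ind))

best = evolve()
print("best assembly time:", round(assembly_time(*best), 1))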

Relevance:

100.00%

Publisher:

Abstract:

According to some models of visual selective attention, objects in a scene activate corresponding neural representations, which compete for perceptual awareness and motor behavior. During a visual search for a target object, top-down control exerted by working memory representations of the target's defining properties resolves competition in favor of the target. These models, however, ignore the existence of associative links among object representations. Here we show that such associations can strongly influence deployment of attention in humans. In the context of visual search, objects associated with the target were both recalled more often and recognized more accurately than unrelated distractors. Notably, both target and associated objects competitively weakened recognition of unrelated distractors and slowed responses to a luminance probe. Moreover, in a speeded search protocol, associated objects rendered search both slower and less accurate. Finally, the first saccades after onset of the stimulus array were more often directed toward associated than control items.

Relevance:

100.00%

Publisher:

Abstract:

The Fibre Distributed Data Interface (FDDI) represents the new generation of local area networks (LANs). These high speed LANs are capable of supporting up to 500 users over a 100 km distance. User traffic is expected to be as diverse as file transfers, packet voice and video. As the proliferation of FDDI LANs continues, the need to interconnect these LANs arises. FDDI LAN interconnection can be achieved in a variety of different ways. Some of the most commonly used today are public data networks, dial up lines and private circuits. For applications that can potentially generate large quantities of traffic, such as an FDDI LAN, it is cost effective to use a private circuit leased from the public carrier. In order to send traffic from one LAN to another across the leased line, a routing algorithm is required. Much research has been done on the Bellman-Ford algorithm and many implementations of it exist in computer networks. However, due to its instability and problems with routing table loops it is an unsatisfactory algorithm for interconnected FDDI LANs. A new algorithm, termed ISIS, which is being standardized by the ISO, provides a far better solution. ISIS will be implemented in many manufacturers' routing devices. In order to make the work as practical as possible, this algorithm will be used as the basis for all the new algorithms presented. The ISIS algorithm can be improved by exploiting information that is dropped by that algorithm during the calculation process. A new algorithm, called Down Stream Path Splits (DSPS), uses this information and requires only minor modification to some of the ISIS routing procedures. DSPS provides a higher network performance, with very little additional processing and storage requirements. A second algorithm, also based on the ISIS algorithm, generates a massive increase in network performance. This is achieved by selecting alternative paths through the network in times of heavy congestion. This algorithm may select the alternative path at either the originating node, or any node along the path. It requires more processing and memory storage than DSPS, but generates a higher network power. The final algorithm combines the DSPS algorithm with the alternative path algorithm. This is the most flexible and powerful of the algorithms developed. However, it is somewhat complex and requires a fairly large storage area at each node. The performance of the new routing algorithms is tested in a comprehensive model of interconnected LANs. This model incorporates the transport through to the physical layers and generates random topologies for routing algorithm performance comparisons. Using this model it is possible to determine which algorithm provides the best performance without introducing significant complexity and storage requirements.
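ISIS is a link-state protocol, so each router runs a shortest-path computation over the flooded topology database; the sketch below shows that baseline Dijkstra calculation on a toy topology. The thesis's DSPS and alternative-path variants reuse information produced during exactly this calculation, but are not reproduced here:

import heapq

def dijkstra(graph, src):
    """Shortest-path tree over a link-state database, as a link-state
    protocol such as ISIS computes it for each router."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

# Toy interconnected-LAN topology: nodes are routers, weights link costs.
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
dist, prev = dijkstra(graph, "A")
print(dist)   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
print(prev)   # predecessor tree used to build the routing table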

Relevance:

100.00%

Publisher:

Abstract:

A detailed study has been made of the feasibility of adsorptive purification of slack waxes from traces of aromatic compounds using type 13X molecular sieves to achieve 0.01% aromatics in the product. The limited literature relating to the adsorption of high molecular weight aromatic compounds by zeolites was reviewed. Equilibrium isotherms were determined for typical individual aromatic compounds. Lower molecular weight, or more compact, molecules were preferentially adsorbed and the number of molecules captured by one unit cell decreased with increasing molecular weight of the adsorbate. An increase in adsorption temperature resulted in a decrease in the adsorption value. The isosteric heat of adsorption of different types of aromatic compounds was determined from pairs of isotherms at 303 K to 343 K at specific coverages. The lowest heats of adsorption were for dodecylbenzene and phenanthrene. Kinetics of adsorption were studied for different aromatic compounds. The diffusivity decreased significantly when a long alkyl chain was attached to the benzene ring, e.g. in dodecylbenzene; molecules with a small cross-sectional diameter, e.g. cumene, were adsorbed most rapidly. The sorption rate increased with temperature. Apparent activation energies increased with increasing polarity. In a study of the dynamic adsorption of selected aromatic compounds from binary solutions in isooctane or n-alkanes, naphthalene exhibited the best dynamic properties followed by dibenzothiophene and finally dodecylbenzene. The dynamic adsorption of naphthalene from different n-alkane solvents increased with a decrease in solvent molecular weight. A tentative mathematical approach is proposed for the prediction of dynamic breakthrough curves from equilibrium isotherms and kinetic data. The dynamic properties of liquid phase adsorption of aromatics from slack waxes were studied at different temperatures and concentrations. The optimum operating temperature was 543 K. The best dynamic performance was achieved with feeds of low aromatic content. The studies with individual aromatic compounds demonstrated the affinity of type NaX molecular sieves to adsorb aromatics in the concentration range 3% - 5%. Wax purification by adsorption was considered promising and extension of the experimental programme was recommended.
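The thesis reports equilibrium isotherms at several temperatures; one common way to condense such data is a Langmuir fit, from which the temperature dependence of the equilibrium constant gives the isosteric heat via the van't Hoff relation. The functional form and the data points below are our assumptions; the abstract does not name an isotherm model:

import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, K):
    """Langmuir isotherm: loading q versus equilibrium concentration c."""
    return q_max * K * c / (1 + K * c)

# Illustrative data (concentration in wt%, loading in g per 100 g sieve),
# made-up numbers standing in for the measured isotherms.
c = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
q = np.array([4.1, 6.8, 9.5, 11.0, 11.9, 12.4])

(q_max, K), _ = curve_fit(langmuir, c, q, p0=(15.0, 1.0))
print(f"q_max = {q_max:.2f}, K = {K:.3f}")
# With K fitted at two temperatures T1 and T2, the isosteric heat follows
# from the van't Hoff relation ln(K1/K2) = (dH/R) * (1/T2 - 1/T1),
# with dH negative for exothermic adsorption.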