965 results for incremental computation


Relevance: 20.00%

Publisher:

Abstract:

Drilling penetrated pre-Mesozoic crystalline basement beneath abbreviated sedimentary sequences overlying fault blocks in the southeastern Gulf of Mexico. At Hole 538A, located on Catoche Knoll, a foliated, regional metamorphic association of variably mylonitic felsic gneisses and interlayered amphibolite is intruded by post-tectonic diabase dikes. Hornblende from the amphibolite displays internally discordant 40Ar/39Ar age spectra, suggesting initial post-metamorphic cooling at about 500 Ma followed by a mild thermal disturbance at about 200 Ma. Biotite from the gneiss yields a plateau age of 348 Ma, which is interpreted to result from incorporation of extraneous argon components when the biotite system was opened during the thermal overprint at about 200 Ma. A whole-rock diabase sample from Hole 538A records a crystallization age of 190.4 ± 3.4 Ma. A lower grade phyllitic metasedimentary sequence was penetrated at Hole 537, drilled about 30 km northwest of Catoche Knoll. Whole-rock phyllite samples display internally discordant 40Ar/39Ar age spectra, but plateau segments clearly document an early Paleozoic metamorphism at about 500 Ma. The age and lithologic character of the basement terrane penetrated at Holes 537 and 538A suggest that the drilled fault blocks are underlain by attenuated fragments of continental crust of "Pan-African" affinity. This supports pre-Mesozoic tectonic reconstructions that locate Yucatan in the present Gulf recess during the amalgamation of Pangea.

Relevance: 20.00%

Publisher:

Abstract:

The literature on the use of free trade agreements (FTAs) has recently been growing, because it is becoming more important to encourage the use of existing FTAs than to increase the number of FTAs. In this paper, we discuss some practical issues in the computation of FTA utilization rates, which provide a useful measure of the extent to which FTA schemes are actually used in trade. For example, compared with the use of customs data on FTA utilization in imports, there are several points that require care when using certificates of origin data on FTA utilization in exports. Our practical guidance on the computation of FTA utilization rates will be helpful both when computing such rates and when examining the determinants of those rates empirically.
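The basic arithmetic behind an FTA utilization rate can be sketched as follows. This is only an illustration of the general definition (preference-using trade divided by preference-eligible trade); the figures and the function name are hypothetical, and the choice of numerator and denominator is exactly the kind of practical issue the paper discusses, differing between customs data (imports) and certificates-of-origin data (exports).

```python
def utilization_rate(preferential_trade: float, eligible_trade: float) -> float:
    """Share of preference-eligible trade that actually used the FTA scheme.

    preferential_trade: trade value that entered under FTA preferences
                        (customs records for imports, or certificates of
                        origin for exports -- the two data sources compared
                        in the paper).
    eligible_trade:     trade value that could have used the preferences
                        (the exact denominator is a practical issue; this
                        is only an illustration).
    """
    if eligible_trade <= 0:
        raise ValueError("eligible trade must be positive")
    return preferential_trade / eligible_trade

# Hypothetical figures, purely for illustration.
print(f"{utilization_rate(72.5, 120.0):.1%}")  # -> 60.4%
```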

Relevance: 20.00%

Publisher:

Abstract:

Within the regression framework, we show how different levels of nonlinearity influence the instantaneous firing rate prediction of single neurons. Nonlinearity can be achieved in several ways. In particular, we can enrich the predictor set with basis expansions of the input variables (enlarging the number of inputs) or train a simple but different model for each area of the data domain. Spline-based models are popular within the first category. Kernel smoothing methods fall into the second category. Whereas the first choice is useful for globally characterizing complex functions, the second is very handy for temporal data and is able to include inner-state subject variations. Also, interactions among stimuli are considered. We compare state-of-the-art firing rate prediction methods with some more sophisticated spline-based nonlinear methods: multivariate adaptive regression splines and sparse additive models. We also study the impact of kernel smoothing. Finally, we explore the combination of various local models in an incremental learning procedure. Our goal is to demonstrate that appropriate nonlinearity treatment can greatly improve the results. We test our hypothesis on both synthetic data and real neuronal recordings in cat primary visual cortex, giving a plausible explanation of the results from a biological perspective.
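As a rough illustration of the two routes to nonlinearity named above (a global basis expansion of the inputs versus a separate local model per region of the data domain), the following sketch fits both to synthetic data. It is not the MARS/sparse-additive pipeline of the paper; the tuning curve, basis size, and bandwidth are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one stimulus variable x and a noisy nonlinear firing rate y.
x = np.sort(rng.uniform(-2, 2, 300))
y = 20 * np.exp(-x**2) + rng.normal(0, 1.5, x.size)    # hypothetical tuning curve

# (a) Basis expansion: enlarge the predictor set with powers of x and fit
#     a single global linear model by least squares.
X = np.vander(x, N=6, increasing=True)                  # 1, x, ..., x^5
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_basis = X @ beta

# (b) Kernel smoothing: a local (Nadaraya-Watson) average around each query
#     point, i.e. a simple but different model per area of the data domain.
def kernel_smooth(xq, x, y, bandwidth=0.25):
    w = np.exp(-0.5 * ((xq[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

y_kernel = kernel_smooth(x, x, y)

for name, pred in [("basis expansion", y_basis), ("kernel smoothing", y_kernel)]:
    print(f"{name:17s} RMSE = {np.sqrt(np.mean((y - pred) ** 2)):.2f}")
```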

Relevance: 20.00%

Publisher:

Abstract:

Nowadays, computing platforms consist of a very large number of components that must be supplied with different voltage levels and power requirements. Even a very small platform, such as a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems must provide, in a very short time, a power architecture that optimizes performance while meeting electrical specifications as well as cost and size targets. The selection of the architecture and converters directly defines the performance of a given solution, so the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency, and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products offered by different manufacturers, ranging from discrete components (used to build converters) to complete power conversion modules built with different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that could be built, and the designer has to select a limited number of converters in order to simplify the analysis. To overcome these difficulties, this thesis proposes a new design methodology for power supply systems. The methodology integrates evolutionary computation techniques so that a large number of possibilities can be analyzed. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and to select the best performance trade-off for each application. The proposed approach consists of two key steps: one for the automatic generation of architectures and the other for the optimized selection of components. The thesis details the implementation of these two steps. The usefulness of the methodology is corroborated by contrasting the results on real problems and on experiments designed to test the limits of the algorithms.
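To make the idea of evolutionary selection of components concrete, here is a minimal sketch of a genetic algorithm that chooses one converter per load so as to minimize a weighted cost/size/efficiency objective. It is not the methodology of the thesis; the converter catalog, load list, fitness weights, and GA parameters are all invented for illustration.

```python
import random

# Hypothetical converter catalog: (name, efficiency, cost [$], area [mm^2], max current [A]).
CATALOG = [
    ("buck_A",   0.92, 1.2, 30, 3.0),
    ("buck_B",   0.95, 2.5, 45, 6.0),
    ("ldo_C",    0.80, 0.4, 10, 1.0),
    ("module_D", 0.93, 4.0, 80, 10.0),
]
# Hypothetical loads: required current in amperes.
LOADS = [0.5, 2.0, 4.5, 0.8, 6.0]

def fitness(solution):
    """Lower is better: weighted cost + area + efficiency loss, with a large
    penalty when the chosen converter cannot supply the load current."""
    total = 0.0
    for load, idx in zip(LOADS, solution):
        _, eff, cost, area, imax = CATALOG[idx]
        penalty = 1000.0 if load > imax else 0.0
        total += cost + 0.05 * area + 10.0 * (1.0 - eff) + penalty
    return total

def evolve(pop_size=40, generations=100, mutation_rate=0.1):
    # Each individual assigns one catalog index to each load.
    pop = [[random.randrange(len(CATALOG)) for _ in LOADS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]                 # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(LOADS))        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:          # mutation
                child[random.randrange(len(LOADS))] = random.randrange(len(CATALOG))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print([CATALOG[i][0] for i in best], f"fitness = {fitness(best):.2f}")
```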

Relevance: 20.00%

Publisher:

Abstract:

Membrane systems are computationally equivalent to Turing machines. However, their distributed and massively parallel nature yields polynomial solutions to problems whose traditional solutions are non-polynomial. It is therefore very important to develop dedicated hardware and software implementations that exploit these two features of membrane systems. In distributed implementations of P systems, a communication bottleneck problem arises: as the number of membranes grows, the network becomes congested. The purpose of distributed architectures is to reach a compromise between the massively parallel character of the system and the time needed for an evolution step to move from one configuration of the system to the next, thereby solving the communication bottleneck problem. The goal of this paper is twofold. First, to survey in a systematic and uniform way the main results regarding how membranes can be placed on processors in order to obtain a software/hardware simulation of P systems in a distributed environment. Second, we improve some results on the membrane dissolution problem, prove that it is connected, and discuss the possibility of simulating this property in the distributed model. All of this improves the implementation of the system's parallelism, since it increases the parallelism of the external communication among processors. The proposed ideas improve on previous architectures for tackling the communication bottleneck problem: they reduce the total time of an evolution step, increase the number of membranes that can run on a processor, and reduce the number of processors required.
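A toy sketch of the placement question addressed here: mapping membranes onto processors so that parent-child communication stays local while the load stays balanced. The membrane tree, processor count, and greedy heuristic below are invented for illustration and are not one of the surveyed architectures.

```python
from collections import defaultdict

# Hypothetical membrane structure: child -> parent (membrane 0 is the skin).
PARENT = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2, 7: 3, 8: 3, 9: 4}
MEMBRANES = [0] + sorted(PARENT)
NUM_PROCESSORS = 3

def greedy_placement():
    """Assign each membrane to a processor, preferring its parent's processor
    while that processor still has capacity, so parent-child communication
    stays local and the load stays balanced."""
    capacity = -(-len(MEMBRANES) // NUM_PROCESSORS)      # ceiling division
    load = defaultdict(int)
    placement = {}
    for m in MEMBRANES:                                  # parents come before children
        preferred = placement.get(PARENT.get(m), 0)
        if load[preferred] < capacity:
            proc = preferred
        else:
            proc = min(range(NUM_PROCESSORS), key=lambda p: load[p])
        placement[m] = proc
        load[proc] += 1
    return placement

placement = greedy_placement()
external = sum(placement[c] != placement[p] for c, p in PARENT.items())
print(placement)
print(f"parent-child edges crossing processors: {external}")
```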

Relevance: 20.00%

Publisher:

Abstract:

Dendritic computation is a term that has been present in neurophysiological research for a long time [1]. It is still controversial and far from being clarified within the concepts of both computation and neurophysiology [2], [3]. In any case, it has not been integrated into a formal computational scheme or structure, nor into formulations of artificial neural nets. Our objective here is to formulate a type of distributed computation that resembles dendritic trees, in such a way that it exhibits the advantages of distributed neural network computation, chiefly the reliability shown in the presence of holes (scotomas) in the computing net, without "blind spots".
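The reliability claim can be made concrete with a toy model (not the formulation proposed in the paper): if the output is an additive read-out over many overlapping "branch" units, removing a fraction of the branches (a scotoma) only attenuates the response instead of producing blind spots. All sizes and weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "dendritic tree": many overlapping branches, each seeing a random
# subset of the input and contributing a small local nonlinear term.
N_INPUTS, N_BRANCHES = 40, 200
masks = rng.random((N_BRANCHES, N_INPUTS)) < 0.25        # which inputs each branch sees
weights = rng.normal(0, 1, (N_BRANCHES, N_INPUTS)) * masks

def tree_output(x, alive):
    """Sum of branch contributions; 'alive' marks branches not removed by a hole."""
    branch = np.tanh(weights[alive] @ x)                  # local branch nonlinearity
    return branch.sum() / N_BRANCHES                      # distributed, additive read-out

x = rng.normal(0, 1, N_INPUTS)
full = tree_output(x, np.ones(N_BRANCHES, dtype=bool))

for frac in (0.0, 0.1, 0.3):                              # size of the "scotoma"
    alive = rng.random(N_BRANCHES) >= frac
    out = tree_output(x, alive)
    print(f"{frac:.0%} of branches removed -> output {out:+.3f} (intact {full:+.3f})")
```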

Relevance: 20.00%

Publisher:

Abstract:

Global analyzers traditionally read and analyze the entire program at once, in a non-incremental way. However, there are many situations that are not well suited to this simple model and that instead require reanalysis of certain parts of a program which has already been analyzed. In these cases, it appears inefficient to perform the analysis of the program again from scratch, as needs to be done with current systems. We describe how the fixed-point algorithms used in current generic analysis engines for (constraint) logic programming languages can be extended to support incremental analysis. The possible changes to a program are classified into three types: addition, deletion, and arbitrary change. For each of these, we provide one or more algorithms for identifying the parts of the analysis that must be recomputed and for performing the actual recomputation. The potential benefits and drawbacks of these algorithms are discussed. Finally, we present some experimental results obtained with an implementation of the algorithms in the PLAI generic abstract interpretation framework. The results show significant benefits when using the proposed incremental analysis algorithms.
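A generic worklist sketch (not the PLAI algorithms themselves) shows where the incremental savings come from: after a change, only the changed predicate and its transitive callers re-enter the worklist, rather than the whole program. The call graph and the "property flag" abstract domain below are invented for illustration, and the sketch only covers the easy, acyclic case.

```python
from collections import defaultdict, deque

# Hypothetical call graph: predicate -> predicates it calls.
CALLS = {"main": ["p", "q"], "p": ["r"], "q": ["r"], "r": []}
# Local abstract contribution of each predicate (here: a set of property flags).
LOCAL = {"main": {"io"}, "p": {"arith"}, "q": set(), "r": {"db"}}

CALLERS = defaultdict(set)
for pred, callees in CALLS.items():
    for c in callees:
        CALLERS[c].add(pred)

def solve(result, worklist):
    """Generic worklist fixed point: a predicate's value is its own local
    contribution joined with the current values of everything it calls."""
    while worklist:
        pred = worklist.popleft()
        new = set(LOCAL[pred])
        for callee in CALLS[pred]:
            new |= result[callee]
        if new != result[pred]:
            result[pred] = new
            worklist.extend(CALLERS[pred])               # only dependants are re-examined
    return result

# Initial (from-scratch) analysis: every predicate starts on the worklist.
result = solve(defaultdict(set), deque(CALLS))

# Incremental step after an 'arbitrary change' to r: re-seed the worklist
# with r alone instead of re-analyzing the whole program.
LOCAL["r"] = {"db", "net"}
result = solve(result, deque(["r"]))
print(dict(result))
```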