973 results for Parallel programming models


Relevance:

30.00%

Publisher:

Abstract:

With the development and capabilities of the Smart Home system, people today are entering an era in which household appliances are no longer controlled only by people, but also operated by a smart system. This results in a more efficient, convenient, comfortable, and environmentally friendly living environment. A critical part of the Smart Home system is Home Automation, which means that a Micro-Controller Unit (MCU) controls all the household appliances and schedules their operating times. This reduces electricity bills by shifting power consumption from on-peak hours to off-peak hours, exploiting the different hourly prices. In this paper, we propose an algorithm for scheduling multi-user power consumption and implement it on an FPGA board, which serves as the MCU. The algorithm schedules tasks with discrete power levels and is based on dynamic programming, finding a schedule close to the optimal one. We chose an FPGA as the system's controller because it has low complexity, parallel processing capability, and a large number of I/O interfaces for further development, and because it is programmable in both software and hardware. In conclusion, the algorithm runs quickly on the FPGA board and the solutions obtained are good enough for consumers.
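The abstract does not include the algorithm itself. As a rough illustration of scheduling discrete power-level tasks against hourly prices, the sketch below does an exhaustive search over start hours; all task data, prices, and the capacity limit are invented, and the paper's dynamic-programming formulation and FPGA implementation are not reproduced here.

```python
from itertools import product

# Hourly electricity prices over a 6-hour horizon -- invented numbers.
PRICES = [30, 28, 12, 10, 11, 25]
CAP = 3  # maximum total power level drawn in any single hour

# Each task: (power_level, duration_hours) -- discrete levels, runs contiguously.
TASKS = [(2, 2), (1, 3), (2, 1)]

def schedule(tasks, prices, cap):
    """Exhaustively search start hours (fine for tiny instances); a
    dynamic-programming formulation prunes this same state space."""
    horizon = len(prices)
    best_cost, best_starts = float("inf"), None
    start_ranges = [range(horizon - d + 1) for _, d in tasks]
    for starts in product(*start_ranges):
        load = [0] * horizon
        cost = 0
        for (power, dur), s in zip(tasks, starts):
            for h in range(s, s + dur):
                load[h] += power
                cost += power * prices[h]
        if max(load) <= cap and cost < best_cost:
            best_cost, best_starts = cost, starts
    return best_cost, best_starts
```

For these invented numbers the cheapest feasible schedule packs all three tasks into the off-peak hours 2-4 without exceeding the per-hour capacity.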

Relevance:

30.00%

Publisher:

Abstract:

This paper treats the problem of setting the inventory level and optimizing the buffer allocation of closed-loop flow lines operating under the constant-work-in-process (CONWIP) protocol. We solve a very large but simple linear program that models an entire simulation run of a closed-loop flow line in discrete time to determine a production rate estimate of the system. This approach, introduced by Helber, Schimmelpfeng, Stolletz, and Lagershausen (2011) for open flow lines with limited buffer capacities, is extended here to closed-loop CONWIP flow lines. With this method, both the CONWIP level and the buffer allocation can be optimized simultaneously. The first part of a numerical study deals with the accuracy of the method. In the second part, we focus on the relationship between the CONWIP inventory level and the short-term profit. The accuracy of the method turns out to be best for those configurations that maximize production rate and/or short-term profit.
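As a toy companion to the abstract, the recursion below simulates a deterministic closed-loop CONWIP line and estimates its production rate. Processing times, line length, and CONWIP level are invented; intermediate buffers are unlimited in this sketch, unlike the buffer-constrained lines of the paper, which instead encodes such a sample path as a linear program.

```python
def conwip_throughput(proc_times, W, n_jobs):
    """Discrete-time sample path of a closed-loop CONWIP flow line.
    D[i][k] = time at which job k leaves machine i (deterministic service)."""
    M = len(proc_times)
    D = [[0.0] * n_jobs for _ in range(M)]
    for k in range(n_jobs):
        for i in range(M):
            prev_machine = D[i - 1][k] if i > 0 else 0.0
            prev_job = D[i][k - 1] if k > 0 else 0.0
            # CONWIP card: job k may enter only after job k-W leaves the line
            card_free = D[M - 1][k - W] if (i == 0 and k >= W) else 0.0
            start = max(prev_machine, prev_job, card_free)
            D[i][k] = start + proc_times[i]
    # long-run production rate estimate: jobs per unit time
    return n_jobs / D[M - 1][n_jobs - 1]
```

With a bottleneck service time of 2 the line saturates at rate 0.5 once the CONWIP level covers the critical WIP; with a single card the line behaves serially at rate 0.25.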

Relevance:

30.00%

Publisher:

Abstract:

Repeated evolution of the same phenotypic difference during independent episodes of speciation is strong evidence for selection during speciation. More than 1,000 species of cichlids, over 10% of the world's freshwater fish species, have arisen within the past million years in Lakes Malawi and Victoria in eastern Africa. Many pairs of closely related sympatric species differ in their nuptial coloration in very similar ways. Nuptial coloration is important in their mate choice, and speciation by sexual selection on genetically or ecologically constrained variation in nuptial coloration has been proposed; it would repeatedly produce similar nuptial types in different populations, a prediction that was difficult to test in the absence of population-level phylogenies. We measured genetic similarity between individuals within and between populations, species, and lake regions by typing 59 individuals at more than 2,000 polymorphic genetic loci. From these data we reconstructed, to our knowledge, the first large species-level phylogeny for the most diverse group of Lake Malawi cichlids. We used the genetic and phylogenetic data to test the divergent-selection scenario against colonization, character displacement, and hybridization scenarios that could also explain diverse communities. Diversity has arisen by replicated radiations into the same color types, resulting in phenotypically very different yet closely related species within regions, and phenotypically highly similar yet unrelated sets of species between regions. This is consistent with divergent selection during speciation and inconsistent with the colonization and character displacement models.

Relevance:

30.00%

Publisher:

Abstract:

The near-nucleus coma of Comet 9P/Tempel 1 has been simulated with the 3D Direct Simulation Monte Carlo (DSMC) code PDSC++ (Su, C.-C. [2013]. Parallel Direct Simulation Monte Carlo (DSMC) Methods for Modeling Rarefied Gas Dynamics. PhD Thesis, National Chiao Tung University, Taiwan), and the derived column densities have been compared to observations of the water vapour distribution obtained with the infrared imaging spectrometer on the Deep Impact spacecraft (Feaga, L.M., A’Hearn, M.F., Sunshine, J.M., Groussin, O., Farnham, T.L. [2007]. Icarus 191(2), 134–145. http://dx.doi.org/10.1016/j.icarus.2007.04.038). Modelled total production rates are also compared to various observations made at the time of the Deep Impact encounter. Three different models were tested. All of them use the shape model constructed from the Deep Impact observations by Thomas et al. (Thomas, P.C., Veverka, J., Belton, M.J.S., Hidy, A., A’Hearn, M.F., Farnham, T.L., et al. [2007]. Icarus, 187(1), 4–15. http://dx.doi.org/10.1016/j.icarus.2006.12.013). Outgassing that depends only on the cosine of the solar insolation angle on each shape-model facet is shown to provide an unsatisfactory model. Models constructed on the basis of the active areas suggested by Kossacki and Szutowicz (Kossacki, K., Szutowicz, S. [2008]. Icarus, 195(2), 705–724. http://dx.doi.org/10.1016/j.icarus.2007.12.014) are shown to be superior. The Kossacki and Szutowicz model, however, also shows deficits, which we have sought to improve upon. For the best model we investigate the properties of the outflow.
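For illustration, the purely insolation-driven boundary condition used by the first model (and found unsatisfactory by the paper) can be sketched as a gas flux proportional to the cosine of the angle between a facet normal and the sun direction; the vectors and the peak flux below are invented.

```python
import math

def cosine_flux(normal, sun_dir, q_max=1.0):
    """Flux proportional to max(0, cos theta) between the facet normal and the
    sun direction; night-side facets (cos theta < 0) emit nothing in this
    simple model."""
    dot = sum(n * s for n, s in zip(normal, sun_dir))
    n_norm = math.sqrt(sum(n * n for n in normal))
    s_norm = math.sqrt(sum(s * s for s in sun_dir))
    return q_max * max(0.0, dot / (n_norm * s_norm))
```

A subsolar facet emits at the peak rate, a night-side facet not at all, and a facet tilted 45 degrees emits at 1/sqrt(2) of the peak.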

Relevance:

30.00%

Publisher:

Abstract:

Here we present an improved astronomical timescale for the past 5 Ma as recorded at ODP Site 1143 in the southern South China Sea, using a recently published Asian summer monsoon record (the hematite-to-goethite content ratio, Hm/Gt) and a parallel benthic d18O record. Correlating the benthic d18O record to the stack of 57 globally distributed benthic d18O records (the LR04 stack) and the Hm/Gt curve to the 65°N summer insolation curve is a particularly useful approach for obtaining refined timescales, and it forms the basis of our effort. Our proposed modifications result in a more accurate and robust chronology than the existing astronomical timescale for ODP Site 1143. The updated timescale further enables a detailed study of the orbital variability of the low-latitude Asian summer monsoon throughout the Plio-Pleistocene. Comparison of the Hm/Gt record with the d18O record from the same core reveals that the orbital-scale oscillations of the low-latitude Asian summer monsoon differed considerably from the glacial-interglacial climate cycles. The popular view that the summer monsoon intensifies during interglacial stages and weakens during glacial stages appears to be too simplistic for low-latitude Asia: some strong summer monsoon intervals also occurred during glacial stages, beyond their increased occurrence during interglacial stages, and conversely some notably weak summer monsoon intervals occurred during interglacial stages, beyond their anticipated occurrence during glacial stages. The well-known mid-Pleistocene transition (MPT) is identified only in the benthic d18O record, not in the Hm/Gt record from the same core. This suggests that the MPT may be a feature of high- and middle-latitude climates, possibly determined by high-latitude ice-sheet dynamics.
The orbital-scale variations of the low-latitude monsoonal climate respond more directly to insolation and are little influenced by high-latitude processes, so the MPT is likely not recorded. In addition, the Hm/Gt record suggests that low-latitude Asian summer monsoon intensity has followed a long-term decreasing trend since 2.8 Ma, with increased oscillation amplitude. This long-term variability is presumably linked to the Northern Hemisphere glaciation since that time.
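Astronomical tuning of the kind described above amounts to choosing the alignment that maximizes the correlation between a proxy record and its target curve. A toy constant-lag version is sketched below on synthetic sine series (not the Hm/Gt or LR04 data); real tuning warps the depth-to-age mapping point by point rather than sliding the whole series.

```python
import math

def shifted_corr(record, target, lag):
    """Pearson correlation of record[i + lag] against target[i]."""
    if lag >= 0:
        x, y = record[lag:], target
    else:
        x, y = record, target[-lag:]
    n = min(len(x), len(y))
    x, y = x[:n], y[:n]
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def best_lag(record, target, max_lag):
    """Lag (in samples) at which the record lines up best with the target."""
    return max(range(-max_lag, max_lag + 1),
               key=lambda L: shifted_corr(record, target, L))
```

Sliding a sine record that lags its target by five samples recovers exactly that offset.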

Relevance:

30.00%

Publisher:

Abstract:

Benthic foraminiferal stable isotope records from four high-resolution sediment cores, forming a depth transect between 1237 m and 2303 m on the South Iceland Rise, have been used to reconstruct intermediate and deep water paleoceanographic changes in the northern North Atlantic during the last 21 ka (spanning Termination I and the Holocene). Typically, a sampling resolution of ~100 years is attained. Deglacial core chronologies are accurately tied to North Greenland Ice Core Project (NGRIP) ice core records through the correlation of tephra layers and changes in the percent abundance of Neogloboquadrina pachyderma (sinistral) with transitions in NGRIP. The evolution from the glacial mode of circulation to the present regime is punctuated by two periods with low benthic d13C and d18O values, which do not lie on glacial or Holocene water mass mixing lines. These periods correlate with the late Younger Dryas/Early Holocene (11.5-12.2 ka) and Heinrich Stadial 1 (14.7-16.8 ka), during which time freshwater input and sea-ice formation led to brine rejection both locally and as an overflow exported from the Nordic seas into the northern North Atlantic, as earlier reported by Meland et al. (2008). The export of brine with low d13C values from the Nordic seas complicates traditional interpretations of low d13C values during the deglaciation as incursions of southern sourced water, although the spatial extent of this brine is uncertain. The records also reveal that the onset of the Younger Dryas was accompanied by an abrupt and transient (~200-300 year duration) decrease in the ventilation of the northern North Atlantic. During the Holocene, Iceland-Scotland Overflow Water only reached its modern flow strength and/or depth over the South Iceland Rise by 7-8 ka, in parallel with surface ocean reorganizations and a cessation in deglacial meltwater input to the North Atlantic.

Relevance:

30.00%

Publisher:

Abstract:

The Drake Passage (DP) is the major geographic constriction for the Antarctic Circumpolar Current (ACC) and exerts a strong control on the exchange of physical, chemical, and biological properties between the Atlantic, Pacific, and Indian Ocean basins. Resolving changes in the flow of circumpolar water masses through this gateway is, therefore, crucial for advancing our understanding of the Southern Ocean's role in global ocean and climate variability. Here, we reconstruct changes in DP throughflow dynamics over the past 65,000 y based on grain size and geochemical properties of sediment records from the southernmost continental margin of South America. Combined with published sediment records from the Scotia Sea, we argue for a considerable total reduction of DP transport and reveal an up to ~40% decrease in flow speed along the northernmost ACC pathway entering the DP during glacial times. Superimposed on this long-term decrease are high-amplitude, millennial-scale variations, which parallel Southern Ocean and Antarctic temperature patterns. The glacial intervals of strong weakening of the ACC entering the DP imply an enhanced export of northern ACC surface and intermediate waters into the South Pacific Gyre and reduced Pacific-Atlantic exchange through the DP ("cold water route"). We conclude that changes in DP throughflow play a critical role in the global meridional overturning circulation and interbasin exchange in the Southern Ocean, most likely regulated by variations in the westerly wind field and changes in Antarctic sea ice extent.

Relevance:

30.00%

Publisher:

Abstract:

Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this article is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The article describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.
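As a language-neutral illustration of the independent And-parallelism the survey discusses, two subgoals that share no unbound variables can be solved concurrently and their answer sets combined afterwards. The predicates below and the use of Python threads are invented for illustration; real systems run Prolog goals on parallel engines rather than thread pools.

```python
from concurrent.futures import ThreadPoolExecutor

# Two "subgoals" over a fixed X that share no unbound variables: their
# solutions can be computed independently and joined afterwards -- the
# essence of independent And-parallelism.  The predicates are invented.
def parent(x):                       # solutions of parent(X, Y) for fixed X
    return {("tom", "ann"), ("tom", "bob")} if x == "tom" else set()

def rich(x):                         # rich(X) succeeds or fails
    return x in {"tom"}

def conj_parallel(x):
    """Solve parent(x, Y) and rich(x) in parallel, then combine the answers."""
    with ThreadPoolExecutor() as pool:
        f1 = pool.submit(parent, x)
        f2 = pool.submit(rich, x)
        pairs, is_rich = f1.result(), f2.result()
    return {y for (_, y) in pairs} if is_rich else set()
```

The conjunction succeeds with the bindings for Y only when both independent subgoals succeed.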

Relevance:

30.00%

Publisher:

Abstract:

Membrane systems are computationally equivalent to Turing machines. However, their distributed and massively parallel nature yields polynomial solutions to problems that classically require non-polynomial time. It is therefore important to develop dedicated hardware and software implementations that exploit these two features of membrane systems. In distributed implementations of P systems, a communication bottleneck problem arises: when the number of membranes grows, the network becomes congested. The purpose of distributed architectures is to reach a compromise between the massively parallel character of the system and the evolution-step time needed to move from one configuration of the system to the next, thereby solving the communication bottleneck problem. The goal of this paper is twofold. First, to survey in a systematic and uniform way the main results on how membranes can be placed on processors in order to obtain a software/hardware simulation of P systems in a distributed environment. Second, we improve some results on the membrane dissolution problem, prove that it is connected, and discuss the possibility of simulating this property in the distributed model. All this improves the implementation of the system's parallelism, since it increases the parallelism of the external communication among processors. The proposed ideas improve on previous architectures for tackling the communication bottleneck problem: they reduce the total time of an evolution step, increase the number of membranes that can run on a processor, and reduce the number of processors needed.
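A minimal, deterministic sketch of one evolution step of a single membrane may help fix ideas. The rules and the starting multiset are invented; real P systems are nondeterministic and also include membrane creation, dissolution, and communication between membranes.

```python
from collections import Counter

def evolve_step(multiset, rules):
    """One maximally parallel evolution step of a single membrane: keep
    applying applicable rules until none fits, then add the products.
    `rules` is a list of (consumed, produced) Counter pairs.  Products are
    held back until the end of the step, so they cannot be consumed within
    the same step -- and the loop is guaranteed to terminate."""
    ms = Counter(multiset)
    produced = Counter()
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if all(ms[s] >= n for s, n in lhs.items()):
                for s, n in lhs.items():
                    ms[s] -= n
                produced.update(rhs)
                changed = True
    ms.update(produced)
    return ms
```

Starting from five copies of `a` with the single rule "aa -> b", the step consumes two pairs and leaves one `a` behind.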

Relevance:

30.00%

Publisher:

Abstract:

We have developed a new projector model specifically tailored for fast list-mode tomographic reconstruction in positron emission tomography (PET) scanners with parallel planar detectors. The model provides an accurate estimation of the probability distribution of coincidence events defined by pairs of scintillating crystals. This distribution is parameterized with 2D elliptical Gaussian functions defined in planes perpendicular to the main axis of the tube of response (TOR). The parameters of these Gaussian functions have been obtained by fitting Monte Carlo simulations that include positron range, acolinearity of the gamma rays, and detector attenuation and scatter effects. The proposed model has been applied efficiently to list-mode reconstruction algorithms. Evaluation with Monte Carlo simulations of a rotating high-resolution PET scanner indicates that this model yields a better recovery-to-noise ratio in OSEM (ordered-subsets expectation-maximization) reconstruction than list-mode reconstruction with a symmetric circular Gaussian TOR model, or histogram-based OSEM with a precalculated system matrix using Monte Carlo simulated models and symmetries.
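The footprint of such a projector can be sketched as a 2D elliptical Gaussian evaluated at in-plane coordinates perpendicular to the TOR axis. The widths and rotation angle below are invented placeholders; the paper fits these parameters to Monte Carlo simulations.

```python
import math

def tor_weight(u, v, sigma_u, sigma_v, theta=0.0):
    """2D elliptical Gaussian weight at in-plane coordinates (u, v), with the
    principal axes rotated by `theta`; normalised to 1 on the TOR axis."""
    c, s = math.cos(theta), math.sin(theta)
    up = c * u + s * v     # coordinate along the rotated major axis
    vp = -s * u + c * v    # coordinate along the rotated minor axis
    return math.exp(-0.5 * ((up / sigma_u) ** 2 + (vp / sigma_v) ** 2))
```

The weight is 1 on the axis and decays anisotropically; rotating the ellipse by 90 degrees swaps which sigma governs each direction.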

Relevance:

30.00%

Publisher:

Abstract:

The calculus of binary relations was created by De Morgan in 1860 and later developed extensively by Peirce and Schröder. Tarski, Givant, Freyd, and Scedrov showed that relation algebras can formalize first-order logic, higher-order logic, and set theory. Building on the mathematical results of Tarski and Freyd, this thesis develops denotational and operational semantics for constraint logic programming with relation algebra as their basis. The main idea is the use of executable semantics: semantics whose defining feature is that execution is possible using the standard reasoning of the semantic universe, in this case equational reasoning. This work shows that distributive relation algebras with a fixed-point operator capture all the standard theory and metatheory of constraint logic programming, including the trees used in proof search. Most program optimization, partial evaluation, and abstract interpretation techniques can be carried out using the semantics presented here, and proving the correctness of the implementation becomes extremely simple. In the first part of the thesis, a constraint logic program is translated into a set of relational terms. The standard set-theoretic interpretation of these relations coincides with the standard semantics of CLP. Queries against the translated program are executed by relation rewriting. The first part concludes with proofs of the correctness and operational equivalence of this new semantics, together with a unification algorithm defined by relation rewriting.
The second part of the thesis develops a semantics for constraint logic programming using Freyd's theory of allegories, a categorical version of the algebra of relations. To this end, two new concepts are defined, Regular Lawvere Categories and _-allegories, in which a logic program can be interpreted. The fundamental advantage of the categorical approach is the definition of a categorical machine that improves on the rewriting system presented in the first part. Thanks to the use of tabular relations, the machine models efficient execution without leaving a strictly formal framework. Using diagram rewriting, an algorithm is defined for computing pullbacks in Regular Lawvere Categories. The domains of the tabulations carry information about memory usage and free variables, while shared state is captured by the diagrams. The specification of the machine induces the formal derivation of an efficient instruction set. The categorical framework brings other important advantages, such as the possibility of incorporating algebraic data types, functions, and other extensions to Prolog, while preserving the fully declarative character of our semantics. ABSTRACT The calculus of binary relations was introduced by De Morgan in 1860, to be greatly developed by Peirce and Schröder, as well as many others in the twentieth century. Using different formulations of relational structures, Tarski, Givant, Freyd, and Scedrov have shown how relation algebras can provide a variable-free way of formalizing first order logic, higher order logic and set theory, among other formal systems. Building on those mathematical results, we develop denotational and operational semantics for Constraint Logic Programming using relation algebra. The idea of executable semantics plays a fundamental role in this work, both as a philosophical and technical foundation.
We call a semantics executable when program execution can be carried out using the regular theory and tools that define the semantic universe. Throughout this work, pure algebraic reasoning is the basis of the denotational and operational results, eliminating all the classical non-equational metatheory associated with traditional semantics for Logic Programming. All reasoning, including execution, is performed algebraically, to the point that we could state that the denotational semantics of a CLP program is directly executable. Techniques like optimization, partial evaluation and abstract interpretation find a natural place in our algebraic models. Other properties, like correctness of the implementation or program transformation, are easy to check, as they are carried out using instances of the general equational theory. In the first part of the work, we translate Constraint Logic Programs to binary relations in a modified version of the distributive relation algebras used by Tarski. Execution is carried out by a rewriting system. We prove the adequacy and operational equivalence of the semantics. In the second part of the work, the relation-algebraic approach is improved by using allegory theory, a categorical version of the algebra of relations developed by Freyd and Scedrov. The use of allegories lifts the semantics to typed relations, which capture the number of logical variables used by a predicate or program state in a declarative way. A logic program is interpreted in a _-allegory, which is in turn generated from a new notion of Regular Lawvere Category. As in the untyped case, program translation coincides with program interpretation. Thus, we develop a categorical machine directly from the semantics. The machine is based on relation composition, with a pullback calculation algorithm at its core. The algorithm is defined with the help of a notion of diagram rewriting.
In this operational interpretation, types represent information about memory allocation and the execution mechanism is more efficient, thanks to the faithful representation of shared state by categorical projections. We finish the work by illustrating how the categorical semantics allows the incorporation into Prolog of constructs typical of Functional Programming, like abstract data types, and strict and lazy functions.
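The executable-semantics idea can be miniaturised outside the thesis's formal setting: with relations represented as sets of pairs, "execution" reduces to relation composition plus a least fixed point for recursion. The edge relation below is invented, and this is only an analogy for a recursive path predicate, not the thesis's categorical machine.

```python
def compose(r, s):
    """Relational composition r ; s on relations given as sets of pairs."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def transitive_closure(r):
    """Least fixed point of X = r | (r ; X): the relational reading of a
    recursive 'path' predicate."""
    x = set(r)
    while True:
        nxt = x | compose(r, x)
        if nxt == x:
            return x
        x = nxt

# Invented example: a chain graph 1 -> 2 -> 3 -> 4.
edge = {(1, 2), (2, 3), (3, 4)}
```

Querying the closure answers reachability questions purely by set-level relation algebra.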

Relevance:

30.00%

Publisher:

Abstract:

The province of Salta is located in the northwest of Argentina, on the border with Bolivia, Chile, and Paraguay. Its capital is the city of Salta, which concentrates half of the inhabitants of the province and has grown to 600,000 inhabitants from the small Spanish town founded in 1583. The city is crossed by the Arenales River, which descends from the nearby mountains to the north and serves both as a source of water and as the outlet of the sewers. With the city's growth, however, the river has become a focus of infection and of remarkable unhealthiness. It is necessary to undertake a plan for the recovery of the river, directed at achieving the well-being of the community and improving its quality of life. The fundamental idea of the plan is to achieve an ordering of the river basin and an integral management of the channel and its surroundings, including cleaning it out. The improvement of the water quality, the healthiness of the surroundings, and the improvement of the environment must go hand in hand with the development of sport, relaxation, and tourism activities, the establishment of breeding grounds, kitchen gardens, and micro-enterprises with clean production, and other actions that encourage their appropriation by society, a basic factor for their care and sustainable use. The present pollution is organic, chemical, industrial, and domestic, caused by the dumping of garbage and sewer effluents, and it affects not only the flora and small fauna, destroying the biodiversity, but also the health of the people living on the river's margins. The plan must also consider, besides hydric and environmental cleaning and the prevention of floods, the planning of aggregate extraction, the infrastructure and bank consolidation works, and the arrangement of the whole river basin. Public intervention at the state, provincial, and local levels, as well as private intervention, must be considered.
The model includes a sub-model for choosing the entity best suited to reach the proposed objectives, answering the social, environmental, and economic requirements. For that purpose the authors have used multi-criteria decision methods to rate and select alternatives and to program their implementation. The model contemplates short-, medium-, and long-term actions. Together they form a Pareto-optimal alternative that secures the integral, orderly, and suitable management of the basin of the Arenales River, focusing on its passage through the city of Salta.
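A minimal weighted-sum version of such a multi-criteria selection might look as follows. The alternatives, criteria, scores, and weights are all invented for illustration; the authors use more elaborate multi-criteria decision methods.

```python
# Invented candidate managing entities, scored 0-10 on three criteria;
# the weights reflect a social / environmental / economic balance.
WEIGHTS = {"social": 0.4, "environmental": 0.35, "economic": 0.25}
ALTERNATIVES = {
    "river basin authority": {"social": 7, "environmental": 8, "economic": 5},
    "municipal department":  {"social": 6, "environmental": 5, "economic": 8},
    "public-private trust":  {"social": 5, "environmental": 6, "economic": 9},
}

def rank(alternatives, weights):
    """Rank alternatives by weighted-sum score, best first."""
    def score(criteria):
        return sum(weights[c] * criteria[c] for c in weights)
    return sorted(alternatives, key=lambda a: score(alternatives[a]),
                  reverse=True)
```

With these invented weights the socially and environmentally strongest alternative comes out on top, which is the point of making the trade-offs explicit.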

Relevance:

30.00%

Publisher:

Abstract:

The authors are from UPM and have all intervened, at different times and ages, in different academic or real cases on the subject. Building on the precedent of E. Torroja and A. Páez in Madrid, Spain (probabilistic safety models for concrete, around 1957, now discussed at ICOSSAR conferences), author J.M. Antón, involved since autumn 1967 in European steel construction within the CECM, produced a mathematical model for the superposition of independent loads with reductions, and from it a pattern of load coefficients for codes (Rome, February 1969) that was practically adopted for European construction; at JCSS (Lisbon, February 1974) he suggested unifying the treatment of concrete, steel, and aluminium. That model describes each type of load with a Gumbel Type I law over 50 years, reduced to one year so that it can be added to other independent loads; the sum is then set, using Gumbel theory, to a 50-year return period. Parallel models exist. A complete reliability system was produced, including nonlinear effects such as buckling, phenomena considered to some extent in the current structural Eurocodes derived from the Model Codes. The author also considered the system within CEB in the presence of hydraulic effects from rivers, floods, and the sea, with reference to actual practice. When drafting a road drainage norm for MOPU in Spain, the authors developed an optimization model that provides a way to determine the return period, from 10 to 50 years, for the hydraulic flows to be considered in road drainage. Satisfactory examples were a stream in the southeast of Spain, modelled with a Gumbel Type I law, and a paper by Ven Te Chow on the Mississippi at Keokuk using a Gumbel Type II law; the model can be modernized with a wider variety of extreme-value laws. In the MOPU drainage norm, the drafting commission also acted as an expert panel to set a table of return periods for road drainage elements, in effect a complex multi-criteria decision system. These precedent ideas were used, e.g., in wide-reaching codes and presented at symposia and meetings, but not published in English-language journals; a condensed account of the authors' contributions is presented here. The authors are also involved in optimization for hydraulic and agricultural planning, and offer modest hints of intended applications to agricultural and environmental planning, namely the selection of the criteria and utility functions involved in Bayesian, multi-criteria, or mixed decision systems. Modest consideration is given to climate change, to production and commercial systems, and to other factors such as social and financial ones.
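The Gumbel Type I machinery mentioned above can be sketched numerically: converting a return period into a design value, and rescaling an annual maximum law to an n-year one (the maximum of n independent Gumbel years is Gumbel again with a shifted location). The location and scale parameters below are invented.

```python
import math

def gumbel_cdf(x, mu, beta):
    """Gumbel Type I CDF with location mu and scale beta."""
    return math.exp(-math.exp(-(x - mu) / beta))

def design_value(T, mu, beta):
    """Value exceeded on average once every T years under an annual Gumbel law."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

def n_year_params(mu, beta, n):
    """Max of n independent years is Gumbel with location mu + beta*ln(n)."""
    return mu + beta * math.log(n), beta
```

The 50-year design value has annual non-exceedance probability 0.98, and the rescaled 50-year law agrees with raising the annual CDF to the 50th power.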

Relevance:

30.00%

Publisher:

Abstract:

In recent years a lot of research has been invested in parallel processing of numerical applications. However, parallel processing of symbolic and AI applications has received less attention. This paper presents a system for parallel symbolic computing, named ACE, based on the logic programming paradigm. ACE is a computational model for the full Prolog language, capable of exploiting Or-parallelism and Independent And-parallelism. In this paper we focus on the implementation of the and-parallel part of the ACE system (called &ACE) on a shared memory multiprocessor, describing its organization and some optimizations, and presenting some performance figures that prove the ability of &ACE to efficiently exploit parallelism.

Relevance:

30.00%

Publisher:

Abstract:

We present a parallel graph narrowing machine, which is used to implement a functional logic language on a shared memory multiprocessor. It is an extension of an abstract machine for a purely functional language. The result is a programmed graph reduction machine which integrates the mechanisms of unification, backtracking, and independent and-parallelism. In the machine, the subexpressions of an expression can run in parallel. In the case of backtracking, the structure of an expression is used to avoid the reevaluation of subexpressions as far as possible. Deterministic computations are detected. Their results are maintained and need not be reevaluated after backtracking.