952 results for Run-Time Code Generation, Programming Languages, Object-Oriented Programming


Relevance:

100.00%

Publisher:

Abstract:

An effective solution to model and apply planning domain knowledge for deliberation and action in probabilistic, agent-oriented control is presented. Specifically, the addition of a task structure planning component and supporting components to an agent-oriented architecture and agent implementation is described. For agent control in risky or uncertain environments, an approach and method of goal reduction to task plan sets and schedules of action is presented. Additionally, some issues related to component-wise, situation-dependent control of a task planning agent that schedules its tasks separately from planning them are motivated and discussed.

Relevance:

100.00%

Publisher:

Abstract:

We seek to advance knowledge of how the movement of Argentine society as a whole develops, analyzed through a particular case — the northeast of Chubut — in the period running from the imposition of the hegemony of finance capital in 1989-1990 up to 2005. This thesis aims to understand and conceptualize in scientific terms how that society moves: what each of its conflicts expresses, what the different mobilized sectors express, what the structural changes under way express, and so on. From this perspective we are specifically interested in contributing to the debate on the options for proposing an alternative development project for the province, the region, and the country. Within the general aim of understanding the movement of society, we approach reality by drawing on accumulated knowledge and by delimiting that reality. We delimit it at two levels. First, spatially: we take as our basis a region we call the northeast of Chubut, carving out of the province of Chubut the area that experienced the greatest growth linked to the development-pole programs of the 1960s, 1970s, and part of the 1980s. It is bounded by the present-day departments of Rawson and Biedma, according to the administrative division the province has used since 1957. Second, temporally: our object of study is the movement of society in that region during the period from 1989-1990 to 2005. We consider that this period allows us to observe the process of change generated by the realization of the hegemony of finance capital, together with the process of protests, struggles, and social conflicts that unfolds in society within the framework of those changes.
The decision to study up to 2005 stems from the relevance of understanding how the process continues after 2001-2002. In those years the national economy recovered, with a strong boost for the region, which raises the question of what this 'recovery' actually consists of and whether we are facing an organic or a conjunctural movement of the economy. The period also allowed us to deepen the debate on the relations of political forces, by making observable the period in which the bourgeoisie managed to recover institutional representation as the legitimate expression of society and to halt social protest.

Relevance:

100.00%

Publisher:

Abstract:

The deployment of LOOME was performed by lowering the LOOME frame by winch, followed by positioning of the surface sensors across the most active site by ROV. The frame was placed on an inactive slab of hydrates, eastwards of and adjacent to the hot spot. As part of the LOOME frame, a Sun & Sea CTD 60M multi-parameter probe was deployed approximately 3 m above the seafloor. The device was rated to 2000 m water depth. A DeepSea Power & Light SeaBattery (12 V) was used as the energy supply, which allows the CTD 60M to run for more than a year. The memory capacity of the probe likewise suffices to store data for more than a year at a time resolution of better than one measurement per minute. The probe was configured to start running once the energy supply is connected and a magnetic switch is closed; an LED on top of the CTD indicates the current state of the probe. The major aim was to record the temperature and pressure regime in the bottom water at the Håkon Mosby Mud Volcano.
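The year-long storage claim is easy to sanity-check: at one measurement per minute, a year of deployment produces roughly half a million records. A back-of-the-envelope calculation follows; the per-record size is an assumption for illustration, not a figure from the cruise report:

```python
# Records produced by one year of logging at one measurement per minute.
MINUTES_PER_YEAR = 365 * 24 * 60
print(MINUTES_PER_YEAR)  # 525600 records

# Assuming a hypothetical 16-byte record (timestamp, pressure,
# temperature, conductivity), the total raw data volume stays small:
BYTES_PER_RECORD = 16  # assumed, not stated in the source
total_mb = MINUTES_PER_YEAR * BYTES_PER_RECORD / 1e6
print(round(total_mb, 1))  # 8.4 (MB) -- easily within a logger's memory
```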

Relevance:

100.00%

Publisher:

Abstract:

This thesis contributes to the analysis and design of printed reflectarray antennas. The main part of the work focuses on the analysis of dual offset antennas comprising two reflectarray surfaces, one acting as sub-reflector and the other as main reflector. These configurations introduce additional complexity in several respects compared with conventional dual offset reflectors; however, they offer many degrees of freedom that can be used to improve the electrical performance of the antenna. The thesis is organized in four parts: the development of an analysis technique for dual-reflectarray antennas; a preliminary validation of that methodology using equivalent reflector systems as reference antennas; a more rigorous validation of the software tool by manufacturing and testing a dual-reflectarray antenna demonstrator; and the practical design of dual-reflectarray systems for applications that show the potential of this kind of configuration to scan the beam and to generate contoured beams. In the first part, a general tool has been implemented to analyze high-gain antennas built from two flat reflectarray structures. The classic reflectarray analysis, based on MoM under the local periodicity assumption, is used for both the sub- and main reflectarrays, taking into account the angle of incidence on each reflectarray element. The incident field on the main reflectarray is computed taking into account the field radiated by all the elements of the sub-reflectarray. Two approaches have been developed: one employs a simple approximation to reduce the computer run time, while the other does not but in many cases offers improved accuracy. The approximation consists of computing the reflected field on each main-reflectarray element only once for all the fields radiated by the sub-reflectarray elements, assuming that the response is the same because the only difference is a small variation in the angle of incidence.
This approximation is very accurate when the elements of the main reflectarray show relatively small sensitivity to the angle of incidence. An extension of the analysis technique has been implemented to study dual-reflectarray antennas whose main reflectarray is printed on a parabolic or, in general, curved surface. In many dual-reflectarray configurations, the reflectarray elements are in the near field of the feed-horn. To account for the near field radiated by the horn, the incident field on each reflectarray element is computed using a spherical mode expansion. In this region the angles of incidence are moderately wide, and they are considered in the analysis of the reflectarray to better calculate the actual incident field on the sub-reflectarray elements. This technique improves the accuracy of the predicted co- and cross-polar patterns and antenna gain with respect to ideal feed models. In the second part, as a preliminary validation, the proposed analysis method has been used to design a dual-reflectarray antenna that emulates previous dual-reflector antennas in Ku- and W-band that include a reflectarray as subreflector. The results for the dual-reflectarray antenna compare very well with those of the parabolic reflector with reflectarray subreflector: radiation patterns, antenna gain, and efficiency are practically the same when the main parabolic reflector is replaced by a flat reflectarray. The results show that the gain is only reduced by a few tenths of a dB as a result of the ohmic losses in the reflectarray. The phase adjustment on two surfaces provided by the dual-reflectarray configuration can be used to improve the antenna performance in applications requiring multiple beams, beam scanning, or shaped beams. Third, a very challenging dual-reflectarray antenna demonstrator has been designed, manufactured, and tested for a more rigorous validation of the analysis technique presented.
In the proposed antenna configuration the feed, the sub-reflectarray, and the main reflectarray are in the near field of one another, so the conventional far-field approximations are not suitable for the analysis of such an antenna. This geometry serves as a benchmark for the proposed analysis tool under very stringent conditions. Some aspects of the proposed analysis technique that improve its accuracy are also discussed. These improvements include a novel method to reduce the inherent cross-polarization introduced mainly by grounded patch arrays. It has been verified that cross-polarization in offset reflectarrays can be significantly reduced by properly adjusting the patch dimensions so as to produce an overall cancellation of the cross-polarization. The dimensions of the patches are adjusted not only to provide the phase distribution required to shape the beam, but also to exploit the zero crossings of the cross-polarization components. The last part of the thesis deals with direct applications of the technique described, which is directly applicable to the design of contoured-beam antennas for DBS applications, where the cross-polarization requirements are very stringent. The beam shaping is achieved by synthesizing the phase distribution on the main reflectarray while the sub-reflectarray emulates an equivalent hyperbolic subreflector. Dual-reflectarray antennas also offer the ability to scan the beam over small angles about boresight. Two possible architectures for a Ku-band antenna are described, based on a dual planar reflectarray configuration that provides electronic beam scanning in a limited angular range. In the first architecture, the beam scanning is achieved by introducing phase control in the elements of the sub-reflectarray while the main reflectarray is passive.
A second alternative is also studied, in which the beam scanning is produced using 1-bit control on the main reflectarray, while a passive sub-reflectarray is designed to provide a large focal distance within a compact configuration. The system aims to provide a solution for bi-directional satellite links for emergency communications. In both proposed architectures, the objective is compact optics and an antenna that is simple to fold and deploy.

Relevance:

100.00%

Publisher:

Abstract:

The solaR package includes a set of functions to calculate the solar radiation incident on a photovoltaic generator and to simulate the performance of several applications of photovoltaic energy. The package performs the whole calculation procedure, from both daily and intradaily global horizontal irradiation to the final productivity of grid-connected PV systems and water-pumping PV systems. The package is built on a set of S4 classes. The core of each class is a group of slots with yearly, monthly, daily, and intradaily multivariate time series (using the zoo package). The classes share a variety of methods to access the information (for example, as.zooD provides a zoo object with the daily multivariate time series of the corresponding object) and several visualisation methods based on the lattice and latticeExtra packages.
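solaR itself is an R package, so its S4/zoo machinery cannot be shown directly here; the following Python sketch only illustrates the kind of aggregation step the package performs — collapsing intradaily global irradiance samples into a daily irradiation total. All data, names, and the hourly sample step are invented for illustration:

```python
from datetime import datetime, timedelta
from collections import defaultdict

def daily_irradiation(samples, step_hours=1.0):
    """Sum intradaily irradiance samples (W/m^2) into daily
    irradiation totals (Wh/m^2), assuming a fixed sample step."""
    totals = defaultdict(float)
    for ts, g_wm2 in samples:
        totals[ts.date()] += g_wm2 * step_hours  # W/m^2 * h = Wh/m^2
    return dict(totals)

# Two synthetic hourly samples around noon on a clear day:
noon = datetime(2024, 6, 1, 12)
samples = [(noon, 800.0), (noon + timedelta(hours=1), 600.0)]
print(daily_irradiation(samples))  # one day, 1400.0 Wh/m^2
```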

Relevance:

100.00%

Publisher:

Abstract:

Modern FPGAs with run-time reconfiguration allow the implementation of complex systems that offer the flexibility of software-based solutions combined with the performance of hardware. This combination of characteristics, together with the development of new specific methodologies, makes it feasible to reach new points in the system design space, and embedded systems built on these platforms are acquiring more and more importance. However, the practical exploitation of this technique in fields that have traditionally relied on resource-restricted embedded systems is mainly limited by strict power consumption requirements, cost, and the high dependence of dynamic partial reconfiguration (DPR) techniques on the specific features of the underlying device technology. In this work, we tackle these problems by designing a reconfigurable platform based on the low-cost, low-power Spartan-6 FPGA family. The full process of developing the platform from scratch is detailed in the paper. In addition, the implementation of the reconfiguration mechanism, including two profiles, is reported. The first profile is a low-area, low-speed reconfiguration engine based mainly on software functions running on the embedded processor, while the other is a hardware version of the same engine, implemented in FPGA logic. This reconfiguration hardware block was originally designed for the Virtex-5 family, and its porting process is also described in this work, addressing the interoperability problem among different families.

Relevance:

100.00%

Publisher:

Abstract:

Several types of parallelism can be exploited in logic programs while preserving correctness and efficiency, i.e. ensuring that the parallel execution obtains the same results as the sequential one and that the amount of work performed is not greater. However, such results do not take into account a number of overheads which appear in practice, such as process creation and scheduling, which can induce a slow-down or at least limit speedup if they are not controlled in some way. This paper describes a methodology whereby the granularity of parallel tasks, i.e. the work available under them, is efficiently estimated and used to limit parallelism so that the effect of such overheads is controlled. The run-time overhead associated with the approach is usually quite small, since as much work as possible is done at compile time. Also, a number of run-time optimizations are proposed. Moreover, a static analysis of the overhead associated with the granularity control process is performed in order to decide whether it is worthwhile. The performance improvements resulting from the incorporation of grain size control are shown to be quite good, especially for systems with medium to large parallel execution overheads.
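The core idea — spawn a task in parallel only when its estimated granularity exceeds the parallelization overhead — can be sketched as follows. The cost function and threshold are illustrative stand-ins for what the compile-time analysis would infer; the paper targets logic programs, not Python:

```python
SPAWN_OVERHEAD = 50  # assumed cost of process creation and scheduling

def estimated_cost(n):
    # Hypothetical cost function inferred at compile time for a task
    # of input data size n (e.g. a quadratic predicate).
    return n * n

def execute(n, spawn_parallel, run_sequential):
    # Granularity control: pay the parallel overhead only when the
    # estimated work available under the task justifies it.
    if estimated_cost(n) > SPAWN_OVERHEAD:
        return spawn_parallel(n)
    return run_sequential(n)

mode = lambda label: (lambda n: label)
print(execute(5, mode("parallel"), mode("sequential")))   # sequential (cost 25)
print(execute(10, mode("parallel"), mode("sequential")))  # parallel (cost 100)
```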

Relevance:

100.00%

Publisher:

Abstract:

The properties of data and activities in business processes can be used to greatly facilitate several relevant tasks performed at design- and run-time, such as fragmentation, compliance checking, or top-down design. Business processes are often described using workflows. We present an approach for mechanically inferring business domain-specific attributes of workflow components (including data items, activities, and elements of sub-workflows), taking as starting point known attributes of workflow inputs and the structure of the workflow. We achieve this by modeling these components as concepts and applying sharing analysis to a Horn clause-based representation of the workflow. The analysis is applicable to workflows featuring complex control and data dependencies, embedded control constructs such as loops and branches, and embedded component services.
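The effect of the inference — known attributes of workflow inputs propagating through activities to label every derived data item — can be illustrated with a direct fixpoint traversal. The workflow and attribute names are invented, and the paper itself encodes the workflow as Horn clauses and applies sharing analysis rather than this naive propagation:

```python
workflow = {
    # activity name: (input data items, output data items)
    "anonymize": (["patient_record"], ["clean_record"]),
    "aggregate": (["clean_record"], ["report"]),
}

def infer(workflow, input_attrs):
    """Propagate attribute sets from workflow inputs to all data
    items, iterating until a fixpoint is reached."""
    attrs = {k: set(v) for k, v in input_attrs.items()}
    changed = True
    while changed:
        changed = False
        for _, (ins, outs) in workflow.items():
            flowing = set().union(*(attrs.get(i, set()) for i in ins))
            for o in outs:
                if not flowing <= attrs.get(o, set()):
                    attrs[o] = attrs.get(o, set()) | flowing
                    changed = True
    return attrs

result = infer(workflow, {"patient_record": {"personal_data"}})
print(result["report"])  # the attribute reaches the final output
```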

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a novel technique for identifying logically related sections of the heap, such as recursive data structures, objects that are part of the same multi-component structure, and related groups of objects stored in the same collection/array. When combined with the lifetime properties of these structures, this information can be used to drive a range of program optimizations, including pool allocation, object co-location, static deallocation, and region-based garbage collection. The technique outlined in this paper also improves the efficiency of the static analysis by providing a normal form for the abstract models (speeding the convergence of the static analysis). We focus on two techniques for grouping parts of the heap. The first precisely identifies recursive data structures in object-oriented programs based on the types declared in the program. The second is a novel method for grouping objects that make up the same composite structure, which also allows us to partition the objects stored in a collection/array into groups based on a similarity relation. We provide a parametric component in the similarity relation in order to support specific analysis applications (such as a numeric analysis, which would need to partition the objects based on numeric properties of their fields). Using the Barnes-Hut benchmark from the JOlden suite, we show how these grouping methods can be used to identify various types of logical structures, allowing the application of many region-based program optimizations.
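The first grouping technique — detecting recursive data structures from declared types — amounts to finding cycles in the field-type graph. A minimal sketch, with type names and field layouts invented for illustration (the paper works on object-oriented programs, not Python):

```python
# Field-type graph: each declared type maps to the types of its fields.
field_types = {
    "ListNode": ["ListNode", "Object"],  # self-referential next pointer
    "Point": ["float", "float"],         # flat record, no recursion
    "Tree": ["Tree", "Tree", "int"],     # two child pointers
}

def is_recursive(t, seen=None):
    """A type is recursive if a cycle in the field-type graph is
    reachable from it."""
    seen = seen or set()
    if t in seen:
        return True
    if t not in field_types:       # primitive / unknown type: leaf
        return False
    return any(is_recursive(f, seen | {t}) for f in field_types[t])

print(is_recursive("ListNode"))  # True
print(is_recursive("Point"))     # False
```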

Relevance:

100.00%

Publisher:

Abstract:

Nondeterminism and partially instantiated data structures give logic programming expressive power beyond that of functional programming. However, functional programming often provides convenient syntactic features, such as having a designated implicit output argument, which allows function call nesting and sometimes results in more compact code. Functional programming also sometimes allows a more direct encoding of lazy evaluation, with its ability to deal with infinite data structures. We present a syntactic functional extension, used in the Ciao system, which can be implemented in ISO-standard Prolog systems and covers function application, predefined evaluable functors, functional definitions, quoting, and lazy evaluation. The extension is also composable with higher-order features and can be combined with other extensions to ISO-Prolog such as constraints. We also highlight the features of the Ciao system that facilitate the implementation, and present some data on the overhead of lazy evaluation with respect to eager evaluation.
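Ciao's lazy evaluation is implemented on top of Prolog and cannot be reproduced here; as a language-neutral illustration of the underlying idea — elements of an infinite structure are computed only on demand — a Python generator behaves analogously:

```python
from itertools import islice

def naturals(start=0):
    # An "infinite list": defining the structure computes nothing;
    # each element is produced only when a consumer demands it.
    n = start
    while True:
        yield n
        n += 1

# Only the first five elements are ever computed:
print(list(islice(naturals(), 5)))  # [0, 1, 2, 3, 4]
```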

Relevance:

100.00%

Publisher:

Abstract:

We propose a general framework for assertion-based debugging of constraint logic programs. Assertions are linguistic constructions for expressing properties of programs. We define several assertion schemas for writing (partial) specifications for constraint logic programs using quite general properties, including user-defined programs. The framework is aimed at detecting deviations of the program behavior (symptoms) with respect to the given assertions, either at compile-time (i.e., statically) or at run-time (i.e., dynamically). We provide techniques for using information from global analysis both to detect at compile-time assertions which do not hold in at least one of the possible executions (i.e., static symptoms) and assertions which hold for all possible executions (i.e., statically proved assertions). We also provide program transformations which introduce tests into the program for checking at run-time those assertions whose status cannot be determined at compile-time. Both the static and the dynamic checking are provably safe in the sense that all errors flagged are definite violations of the specifications. Finally, we report briefly on the currently implemented instances of the generic framework.
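The run-time side of the framework — inserting tests for assertions that could not be discharged statically — can be mimicked with a checking wrapper. This decorator is an illustrative stand-in for the program transformation described; the names are invented, and the framework itself targets constraint logic programs, not Python:

```python
import functools

def check(pre, post):
    """Wrap a function with run-time pre/postcondition tests, standing
    in for assertions left unresolved at compile time."""
    def wrap(f):
        @functools.wraps(f)
        def checked(*args):
            assert pre(*args), f"precondition violated in {f.__name__}"
            result = f(*args)
            assert post(*args, result), f"postcondition violated in {f.__name__}"
            return result
        return checked
    return wrap

@check(pre=lambda xs: all(isinstance(x, int) for x in xs),
       post=lambda xs, r: r == sorted(xs))
def my_sort(xs):
    return sorted(xs)

print(my_sort([3, 1, 2]))  # [1, 2, 3] -- both assertions hold
```

A flagged error is a definite violation: calling `my_sort(["a"])` trips the precondition test immediately.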

Relevance:

100.00%

Publisher:

Abstract:

In an increasing number of applications (e.g., in embedded, real-time, or mobile systems) it is important or even essential to ensure conformance with respect to a specification expressing resource usages, such as execution time, memory, energy, or user-defined resources. In previous work we have presented a novel framework for data size-aware, static resource usage verification. Specifications can include both lower- and upper-bound resource usage functions. In order to statically check such specifications, upper- and lower-bound resource usage functions (on input data sizes) approximating the actual resource usage of the program are automatically inferred and compared against the specification. The outcome of the static checking of assertions can express intervals for the input data sizes such that a given specification can be proved for some intervals but disproved for others. After an overview of the approach, in this paper we provide a number of novel contributions: we present a full formalization, and we report on and provide results from an implementation within the Ciao/CiaoPP framework (which provides a general, unified platform for static and run-time verification, as well as unit testing). We also generalize the checking of assertions to allow preconditions expressing intervals within which the input data size of a program is supposed to lie (i.e., intervals for which each assertion is applicable), and we extend the class of resource usage functions that can be checked.
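The interval-shaped outcome of the static checking can be illustrated by comparing an inferred usage function against a specified bound over a range of input data sizes. The concrete functions below are invented (CiaoPP infers them automatically), and for simplicity the inferred function is assumed exact, so exceeding the specification genuinely disproves it:

```python
def inferred_usage(n):
    # Usage function the analysis might infer for input size n,
    # assumed exact for this sketch: usage(n) = 2n + 10.
    return 2 * n + 10

def spec_upper(n):
    # Specified upper bound on resource usage: n^2.
    return n * n

def check_sizes(sizes):
    """Classify input data sizes into intervals where the assertion
    is proved vs. disproved."""
    proved = [n for n in sizes if inferred_usage(n) <= spec_upper(n)]
    disproved = [n for n in sizes if inferred_usage(n) > spec_upper(n)]
    return proved, disproved

proved, disproved = check_sizes(range(1, 8))
print(disproved)  # [1, 2, 3, 4]: spec disproved for small inputs...
print(proved)     # [5, 6, 7]: ...but proved from n >= 5 onward
```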