Abstract:
An antenna conceived as a portable system for satellite communications, based on recommendations ITU-R S.580-6 and ITU-R S.465-5 for small antennas (i.e., with a diameter smaller than 50 wavelengths), is introduced. It is a planar, compact structure measuring 40×40×2 cm. The antenna is formed by an array of 256 printed elements covering a wide bandwidth (14.7%) at X-band with a VSWR of 1.4:1. The specification covers the transmission (Tx) and reception (Rx) bands simultaneously. The printed antenna has a radiation pattern with a 3 dB beamwidth of 5°, a gain above 31 dBi, and dual, switchable circular polarization.
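As a quick, illustrative check under assumed values (the abstract does not state the exact operating frequency, so 10 GHz is taken here as a representative X-band value), the aperture size in wavelengths and a rough directivity estimate from the 3 dB beamwidth can be computed as follows; this is a sketch, not the authors' design procedure.

```python
import math

# Assumed X-band frequency (not given in the abstract); 10 GHz is representative.
freq_hz = 10e9
c = 3e8                      # speed of light, m/s
wavelength_m = c / freq_hz   # ~0.03 m at 10 GHz

aperture_m = 0.40            # 40 cm side of the planar array
diameter_in_wavelengths = aperture_m / wavelength_m
print(f"Aperture size: {diameter_in_wavelengths:.1f} wavelengths (< 50, i.e. a 'small antenna')")

# Kraus-style directivity estimate from the two 3 dB beamwidths (both taken as 5 deg).
bw_e_deg = bw_h_deg = 5.0
directivity = 41253.0 / (bw_e_deg * bw_h_deg)
print(f"Estimated directivity: {10 * math.log10(directivity):.1f} dBi "
      "(same order as the reported >31 dBi gain)")
```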
Abstract:
A disruption predictor based on support vector machines (SVM) has been developed for use in JET. The training process uses thousands of discharges and, therefore, high-performance computing has been necessary to obtain the models. In this respect, several models have been generated with data from different JET campaigns. In addition, various kernels (mainly linear and RBF) and parameters have been tested. The main objective of this work has been the implementation of the predictor model under real-time constraints. A C-code software application has been developed to simulate the real-time behavior of the predictor. The application reads the signals from the JET database and simulates the real-time data processing, in particular the specific data-hold method used when reading data from the JET ATM real-time network. The simulator is fully configurable by means of text files to select models, signal thresholds, sampling rates, etc. Results with data from campaigns C23 to C28 are shown.
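As an illustration of the kind of classifier described (using synthetic stand-in data and the scikit-learn library, neither of which is part of the JET work), linear- and RBF-kernel SVM models can be trained and compared like this:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-discharge feature vectors (the real work uses JET signals).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.8).astype(int)   # 1 = "disruptive", 0 = "safe"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for kernel in ("linear", "rbf"):
    model = SVC(kernel=kernel, C=1.0, gamma="scale")
    model.fit(X_tr, y_tr)
    print(kernel, "test accuracy:", round(model.score(X_te, y_te), 3))
```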
Abstract:
The boundary element method (BEM) has been applied successfully to many engineering problems during the last decades. Compared with domain-type methods such as the finite element method (FEM) or the finite difference method (FDM), the BEM handles problems where the medium extends to infinity much more easily, since there is no need to introduce special boundary conditions (quiet or absorbing boundaries) or infinite elements at the boundaries used to limit the studied domain. The determination of the dynamic stiffness of arbitrarily shaped footings is just one of the fields where the BEM has been the method of choice, especially in the 1980s. With the continuous development of computer technology and the available hardware, the size of the problems under study grew and, as the flop count for solving the resulting linear system of equations grows with the third power of the number of equations, there was a need for iterative methods with better performance. The GMRES algorithm presented in [1] is now widely used in implementations of the collocation BEM. While the FEM results in sparsely populated coefficient matrices, the BEM leads, in general, to fully or densely populated ones, depending on the number of subregions, posing a serious memory problem even for today's computers. If the geometry of the problem permits the surface of the domain to be meshed with equally shaped elements, many of the resulting coefficients will be calculated and stored repeatedly. The present paper shows how these unnecessary operations can be avoided, reducing both the calculation time and the storage requirement. To this end, a similar coefficient identification algorithm (SCIA) has been developed and implemented in a program written in Fortran 90. The vertical dynamic stiffness of a single pile in layered soil has been chosen to test the performance of the implementation. The results obtained with the 3D model may be compared with those obtained with an axisymmetric formulation, which are considered the reference values since the mesh quality is much better. The entire 3D model comprises more than 35,000 DOFs, the biggest single region being a soil region with 21168 DOFs. Note that the memory necessary to store all coefficients of this single region is about 6.8 GB, an amount usually not available on personal computers. In the problem under study, the interface zone between the two adjacent soil regions as well as the surface of the top layer may be meshed with equally sized elements. In this case the application of the SCIA leads to an important reduction in memory requirements: the maximum memory used during the calculation has been reduced to 1.2 GB. The application of the SCIA thus permits problems to be solved on personal computers which would otherwise require much more powerful hardware.
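As a rough check of the quoted figure, and assuming double-precision complex coefficients at 16 bytes each (the abstract does not state the precision), the dense storage for the 21168-DOF region works out as follows:

```python
# Dense BEM coefficient matrix storage estimate for the 21168-DOF soil region.
n_dofs = 21168
bytes_per_coeff = 16            # assumed: complex double precision (2 x 8 bytes)

total_bytes = n_dofs ** 2 * bytes_per_coeff
print(f"{total_bytes / 1024**3:.1f} GiB")   # ~6.7 GiB, in line with the ~6.8 GB quoted
```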
Abstract:
A system for the estimation of unknown rectangular room dimensions based on two radio transceivers, both capable of full-duplex operation, is presented. The approach is based on CIR measurements taken at the same place where the signal is transmitted (generated), commonly known as the self-to-self CIR. Another novelty is the receiver antenna design, which consists of eight sectorized antennas with a 45° aperture in the horizontal plane, whose total coverage corresponds to the isotropic one. The dimensions of a rectangular room are reconstructed directly from radio impulse responses by extracting information on features such as round-trip time, received signal strength and reverberation time. Using a radar approach, estimates of the wall and corner positions are derived. Additionally, an analysis of the absorption coefficient of the test environment is conducted and a typical coefficient for a furnished office room is proposed. Its accuracy is confirmed through the results of volume estimation. Tests using measured data were performed, and the results confirm the feasibility of the approach.
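As a minimal illustration of the radar-style geometry involved (a simplification for exposition, not the paper's reconstruction algorithm), the distance to a reflecting wall follows directly from the round-trip time of its echo in the self-to-self CIR:

```python
C = 299_792_458.0  # speed of light, m/s

def wall_distance(round_trip_time_s: float) -> float:
    """Distance to a wall given the round-trip delay of its first-order echo."""
    return C * round_trip_time_s / 2.0

# Example: an echo arriving 20 ns after transmission implies a wall roughly 3 m away.
print(f"{wall_distance(20e-9):.2f} m")
```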
Abstract:
A basic requirement of the data acquisition systems used in long-pulse fusion experiments is the real-time detection of physical events in signals. Developing such applications is usually a complex task, so it is necessary to provide a set of hardware and software tools that simplifies their implementation. This type of application can be implemented in ITER using fast controllers. ITER is standardizing the architectures to be used for fast controller implementation; the standards chosen so far are PXIe architectures (based on PCIe) for the hardware and the EPICS middleware for the software. This work presents a methodology for implementing data acquisition and pre-processing using FPGA-based DAQ cards and shows how to integrate these into fast controllers using EPICS.
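As a hedged sketch of what the EPICS-side integration might look like from a client's point of view (the PV name and the choice of the pyepics client library are illustrative assumptions, not part of the described system), a pre-processed channel exposed by a fast-controller IOC could be read and monitored like this:

```python
from epics import caget, camonitor   # pyepics Channel Access client (assumed choice)

# Hypothetical PV name for a pre-processed DAQ channel published by the IOC.
PV_NAME = "FASTCTRL:CH01:PROCESSED"

value = caget(PV_NAME)
print("latest processed sample:", value)

# Subscribe to updates so downstream event-detection code sees each new value.
camonitor(PV_NAME, callback=lambda pvname, value, **kw: print(pvname, value))
```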
Abstract:
Sampling a network with a given probability distribution has been identified as a useful operation. In this paper we propose distributed algorithms for sampling networks, so that nodes are selected by a special node, called the source, with a given probability distribution. All these algorithms are based on a new class of random walks, which we call Random Centrifugal Walks (RCW). An RCW is a random walk that starts at the source and always moves away from it. First, an algorithm to sample any connected network using RCWs is proposed. The algorithm assumes that each node has a weight, so that the sampling process must select a node with probability proportional to its weight. This algorithm requires a preprocessing phase before the sampling of nodes: a minimum diameter spanning tree (MDST) is created in the network, and the node weights are then efficiently aggregated using the tree. The good news is that the preprocessing is done only once, regardless of the number of sources and the number of samples taken from the network. After that, every sample is taken with an RCW whose length is bounded by the network diameter. Second, RCW algorithms that do not require preprocessing are proposed for grids and for networks with regular concentric connectivity, for the case when the probability of selecting a node is a function of its distance to the source. The key features of the RCW algorithms (unlike previous Markovian approaches) are that (1) they do not need to warm up (stabilize), (2) the sampling always finishes in a number of hops bounded by the network diameter, and (3) they select a node with the exact target probability distribution.
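As an illustrative sketch of the tree-based variant (a simplified reading of the approach, not the authors' exact algorithm): once node weights have been aggregated bottom-up on a spanning tree rooted at the source, a walk that always moves away from the source can stop at the current node with probability weight/subtree-weight and otherwise descend into a child with probability proportional to that child's aggregated weight, yielding samples proportional to the node weights in at most depth-many hops.

```python
import random

def aggregate(tree, weight, root):
    """Preprocessing: subtree[v] = weight of v plus the weights of all its descendants."""
    subtree = {}
    def visit(v):
        subtree[v] = weight[v] + sum(visit(c) for c in tree.get(v, []))
        return subtree[v]
    visit(root)
    return subtree

def rcw_sample(tree, weight, subtree, source):
    """Random centrifugal walk: always moves away from the source; length <= tree depth."""
    v = source
    while True:
        if random.random() < weight[v] / subtree[v]:
            return v                        # stop here with probability weight[v]/subtree[v]
        children = tree.get(v, [])
        total = sum(subtree[c] for c in children)
        r = random.random() * total         # otherwise descend into a child, chosen with
        for c in children:                  # probability proportional to its subtree weight
            r -= subtree[c]
            if r <= 0:
                v = c
                break
        else:
            v = children[-1]                # guard against floating-point rounding

# Toy tree rooted at the source "s"; sampling probabilities are proportional to the weights.
tree = {"s": ["a", "b"], "a": ["c"], "b": [], "c": []}
weight = {"s": 1, "a": 2, "b": 3, "c": 4}
subtree = aggregate(tree, weight, "s")      # one-time preprocessing
print(rcw_sample(tree, weight, subtree, "s"))
```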
Abstract:
An analysis of the main drivers of change that are expected to affect sun-and-beach tourist destinations in a low-growth scenario.
Abstract:
One of the fundamental aspects of adapting teaching to the European Higher Education Area is the shift from models based on the teacher's instruction to models based on student learning. In this work we present an educational experience developed using the case method, with a clearly multidisciplinary character. The experience was developed in the teaching of the analysis and verification of safety rails, a multidisciplinary field that presents great difficulties in its teaching. The use of the case method has given good results in the competences achieved by the students.
Abstract:
The reinforcement of wooden floors is a clearly multidisciplinary subject. Teaching it with the "traditional" method, explaining the theory first and then proposing and solving problems, has not been successful. This paper discusses the results of a teaching experience in which the subject was taught using the case method. The results are clearly superior to those obtained with the traditional methodology.
Abstract:
This paper analyzes issues which appear when supporting pruning operators in tabled LP. A version of the once/1 control predicate tailored for tabled predicates is presented, and an implementation is analyzed and evaluated. Using once/1 with answer-on-demand strategies makes it possible to avoid computing unneeded solutions for problems which can benefit from tabled LP but in which only a single solution is needed, such as model checking and planning. The proposed version of once/1 is also directly applicable to the efficient implementation of other optimizations, such as early completion, cut-fail loops (to, e.g., prune at the top level), if-then-else, and constraint-based branch-and-bound optimization. Although once/1 still presents open issues, such as dependencies of tabled solutions on program history, our experimental evaluation confirms that it provides an arbitrarily large efficiency improvement in several application areas.
Abstract:
Signatures: A-C4
Abstract:
Title page with a copperplate engraving
Abstract:
Olivier Danvy and others have shown the syntactic correspondence between reduction semantics (a small-step semantics) and abstract machines, as well as the functional correspondence between reduction-free normalisers (a big-step semantics) and abstract machines. The correspondences are established by program transformation (so-called interderivation) techniques. A reduction semantics and a reduction-free normaliser are interderivable when the abstract machine obtained from them is the same. However, the correspondences fail when the underlying reduction strategy is hybrid, i.e., when it relies on another sub-strategy. Hybridisation is an essential structural property of full-reducing and complete strategies. Hybridisation is unproblematic in the functional correspondence, but in the syntactic correspondence the refocusing and inlining-of-iterate-function steps become context-sensitive, preventing the refunctionalisation of the abstract machine. We show how to solve the problem and showcase the interderivation of normalisers for normal order, the standard, full-reducing and complete strategy of the pure lambda calculus. Our solution makes it possible to interderive, rather than contrive, full-reducing abstract machines. As expected, the machine we obtain is a variant of Pierre Crégut's full Krivine machine KN.
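To make the notion of a hybrid strategy concrete (a minimal sketch with naive, capture-unsafe substitution, unrelated to the derived machines in the paper), normal-order normalisation can be written as a full-reducing function that relies on a weak-head (call-by-name) sub-strategy:

```python
# Pure lambda terms as tuples: ("var", x), ("lam", x, body), ("app", fun, arg).
# Substitution here is naive and capture-unsafe, so use terms with distinct names.

def subst(body, name, arg):
    """Naive substitution body[name := arg]."""
    if body[0] == "var":
        return arg if body[1] == name else body
    if body[0] == "lam":
        return body if body[1] == name else ("lam", body[1], subst(body[2], name, arg))
    return ("app", subst(body[1], name, arg), subst(body[2], name, arg))

def whnf(t):
    """Sub-strategy: reduce to weak head normal form (call by name)."""
    if t[0] == "app":
        f = whnf(t[1])
        if f[0] == "lam":
            return whnf(subst(f[2], f[1], t[2]))
        return ("app", f, t[2])
    return t

def normalize(t):
    """Hybrid, full-reducing strategy: weak-head reduce first, then normalise subterms."""
    t = whnf(t)
    if t[0] == "lam":
        return ("lam", t[1], normalize(t[2]))
    if t[0] == "app":
        return ("app", normalize(t[1]), normalize(t[2]))
    return t

# (\x. (\y. y) x) z  normalises to the free variable z.
term = ("app", ("lam", "x", ("app", ("lam", "y", ("var", "y")), ("var", "x"))), ("var", "z"))
print(normalize(term))
```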
Abstract:
Graph automorphism (GA) is a classical problem, in which the objective is to compute the automorphism group of an input graph. In this work we propose four novel techniques to speed up algorithms that solve the GA problem by exploring a search tree. They increase the performance of the algorithm by reducing the depth of the search tree and by pruning it effectively. We formally prove that a GA algorithm that uses these techniques correctly computes the automorphism group of the input graph. We also describe how the techniques have been incorporated into the GA algorithm conauto, as conauto-2.03, with at most an additive polynomial increase in its asymptotic time complexity. We have experimentally evaluated the impact of each of the above techniques with several graph families. We have observed that each of the techniques by itself significantly reduces the number of processed nodes of the search tree in some subset of graphs, which justifies the use of each of them. Then, when they are applied together, their effects combine, leading to reductions in the number of processed nodes in most graphs. This is also reflected in a reduction of the running time, which is substantial in some graph families.
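For contrast with tree-search algorithms such as conauto (and only as a naive baseline usable on very small graphs), the automorphism group can be computed by checking every vertex permutation for adjacency preservation:

```python
from itertools import permutations

def automorphisms(n, edges):
    """Brute-force automorphism group of an undirected graph on vertices 0..n-1.

    Exponential in n: a baseline for tiny graphs only, not a practical GA solver.
    """
    edge_set = {frozenset(e) for e in edges}
    autos = []
    for p in permutations(range(n)):
        # p is an automorphism iff it maps every edge onto an edge.
        if all(frozenset((p[u], p[v])) in edge_set for u, v in edge_set):
            autos.append(p)
    return autos

# The 4-cycle 0-1-2-3-0 has the dihedral group of order 8 as its automorphism group.
print(len(automorphisms(4, [(0, 1), (1, 2), (2, 3), (3, 0)])))
```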
Abstract:
The thesis studies the uses and functions of the Buen Retiro palace during the seventeenth and eighteenth centuries, and the process of formation, arrangement and dispersal of the collections of works of art, and especially the paintings, that adorned its rooms. To do so, we have reviewed the extensive available bibliography and consulted numerous sources concerning the royal site, the artists who worked there or produced works that formed part of its decoration, and the impressions of the Retiro recorded by diplomatic representatives and travellers throughout the seventeenth and eighteenth centuries. In this way we have assembled a solid documentary corpus that has allowed us to reconstruct the series of inventories made of the works of art that decorated the Retiro, and also to date the various marks that the paintings received over time. The combination of both sets of data has proved enormously important for identifying the main spaces of the palace and the decorative programmes it housed, and for locating, with reasonable confidence, a good part of the works of art belonging to the Retiro. The extensive chapter devoted to the seventeenth century, indispensable for understanding how the royal site was used and for verifying the changes the collection underwent during the eighteenth century, has made it possible to confirm and document how, with the decision to enlarge the royal apartment of San Jerónimo in 1632, one of the most interesting projects in the history of seventeenth-century European collecting was set in motion. Thanks to various unpublished documents we have been able to determine how and when the commissions intended to furnish the palace were placed, most notably the series of paintings of landscapes with hermits, the series of scenes of Ancient Rome, and that of Saint John the Baptist...