58 results for Linear Codes over Finite Fields
at Universidad Politécnica de Madrid
Abstract:
We present an information reconciliation method and demonstrate for the first time that it can achieve efficiencies close to 0.98. The method is based on belief-propagation decoding of non-binary LDPC codes over finite (Galois) fields. In particular, for convenience and faster decoding, we consider only power-of-two Galois fields.
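As a brief illustration of why power-of-two (characteristic-two) Galois fields are convenient for decoding, the following Python sketch (not the authors' implementation; the field size GF(2^4) and the primitive polynomial are assumptions chosen for exposition) shows the field arithmetic underlying the check-node and variable-node updates of non-binary belief propagation: addition is a plain XOR and multiplication is a shift-and-XOR reduction.

# Minimal GF(2^4) arithmetic sketch (assumed field size and primitive
# polynomial x^4 + x + 1 = 0b10011); addition in characteristic-2 fields
# is a bitwise XOR, which is what makes power-of-two fields fast to decode.

PRIM_POLY = 0b10011   # x^4 + x + 1, primitive over GF(2)
FIELD_BITS = 4

def gf_add(a: int, b: int) -> int:
    """Addition (and subtraction) in GF(2^m) is XOR."""
    return a ^ b

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiply followed by reduction modulo PRIM_POLY."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << FIELD_BITS):   # degree reached m: reduce
            a ^= PRIM_POLY
    return result

if __name__ == "__main__":
    # Example: (x+1)*(x^2+x) = x^3 + x  ->  0b0011 * 0b0110 = 0b1010
    print(bin(gf_mul(0b0011, 0b0110)))  # 0b1010
    print(bin(gf_add(0b1010, 0b0110)))  # 0b1100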
Abstract:
Remote sensing (RS) with aerial robots is becoming increasingly common in Precision Agriculture (PA) practices, due to its advantages over conventional methods. Usually, available commercial platforms providing off-the-shelf waypoint navigation are adopted to perform visual surveys over crop fields in order to acquire specific image samples. How the waypoint list is computed and dispatched to the aerial robot when mapping non-empty agricultural workspaces has not yet been discussed. In this paper we propose an offline mission planner that computes an efficient coverage path, subject to a set of constraints, by approximately decomposing the environment into cells. The aim of this work is therefore to contribute a feasible waypoint-based tool to support PA practices.
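As a rough sketch of the kind of cell-based coverage path such a planner produces, the Python snippet below generates boustrophedon ("lawnmower") waypoints over a grid of cells and skips cells marked as occupied. It is a simplified illustration under an assumed axis-aligned grid geometry, not the mission planner proposed in the paper.

# Minimal boustrophedon (lawnmower) waypoint sketch over a cell grid.
# Illustrative simplification only: the field is assumed to be an
# axis-aligned grid of square cells, and occupied cells are skipped.

from typing import List, Tuple

def coverage_waypoints(rows: int, cols: int, cell_size: float,
                       obstacles: set) -> List[Tuple[float, float]]:
    """Return cell-centre waypoints sweeping row by row, alternating direction."""
    waypoints = []
    for r in range(rows):
        col_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in col_order:
            if (r, c) in obstacles:
                continue  # non-empty cell: skip it
            x = (c + 0.5) * cell_size
            y = (r + 0.5) * cell_size
            waypoints.append((x, y))
    return waypoints

if __name__ == "__main__":
    # 4 x 5 field, 10 m cells, two occupied cells
    wps = coverage_waypoints(4, 5, 10.0, obstacles={(1, 2), (2, 2)})
    print(len(wps), "waypoints, first three:", wps[:3])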
Abstract:
We present a new free library for Constraint Logic Programming over Finite Domains, included with the Ciao Prolog system. The library is entirely written in Prolog, leveraging Ciao's module system and code transformation capabilities in order to achieve a highly modular design without compromising performance. We describe the interface, implementation, and design rationale of each modular component. The library meets several design goals: a high level of modularity, allowing the individual components to be replaced by different versions; high efficiency, being competitive with other FD implementations; a glass-box approach, so the user can specify new constraints at different levels; and a Prolog implementation, in order to ease the integration with Ciao's code analysis components. The core is built upon two small libraries which implement integer ranges and closures. On top of that, a finite domain variable datatype is defined, taking care of constraint re-execution depending on range changes. These three libraries form what we call the FD kernel of the library. This FD kernel is used in turn to implement several higher-level finite domain constraints, specified using indexicals. Together with a labeling module, this layer forms what we name the FD solver. A final level integrates the CLP(FD) paradigm with our FD solver. This is achieved using attributed variables and a compiler from the CLP(FD) language to the set of constraints provided by the solver. It should be noted that the user of the library is encouraged to work at any of those levels as seen convenient: from writing a new range module to enriching the set of FD constraints by writing new indexicals.
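To convey the idea behind the FD kernel described above (integer ranges plus constraint re-execution whenever a range changes), here is a tiny sketch in Python rather than Prolog; the class and function names are illustrative assumptions and do not reflect the Ciao library's actual interface.

# Tiny illustration (in Python, not Prolog) of the idea behind an FD kernel:
# each variable holds an integer range, and constraints are re-executed
# whenever a range they depend on shrinks.  Shown here with bounds
# propagation for X + Y = Z.

class FDVar:
    def __init__(self, lo: int, hi: int):
        self.lo, self.hi = lo, hi

    def narrow(self, lo: int, hi: int) -> bool:
        """Shrink the range; return True if it changed, raise on failure."""
        new_lo, new_hi = max(self.lo, lo), min(self.hi, hi)
        if new_lo > new_hi:
            raise ValueError("empty domain: constraint store is inconsistent")
        changed = (new_lo, new_hi) != (self.lo, self.hi)
        self.lo, self.hi = new_lo, new_hi
        return changed

def plus(x: FDVar, y: FDVar, z: FDVar):
    """Bounds propagation for X + Y = Z; loop until a fixpoint is reached."""
    changed = True
    while changed:
        changed = False
        changed |= z.narrow(x.lo + y.lo, x.hi + y.hi)
        changed |= x.narrow(z.lo - y.hi, z.hi - y.lo)
        changed |= y.narrow(z.lo - x.hi, z.hi - x.lo)

if __name__ == "__main__":
    x, y, z = FDVar(0, 10), FDVar(3, 7), FDVar(0, 5)
    plus(x, y, z)
    print((x.lo, x.hi), (y.lo, y.hi), (z.lo, z.hi))  # (0, 2) (3, 5) (3, 5)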
Abstract:
Dislocation mobility, the relation between applied stress and dislocation velocity, is an important property for modeling the mechanical behavior of structural materials. These mobilities reflect the interaction between the dislocation core and the host lattice and, thus, atomistic resolution is required to capture their details. Because the mobility function is multiparametric, its computation is often highly demanding in terms of computational resources. Optimizing how tractions are applied can be greatly advantageous in accelerating convergence and reducing the overall computational cost of the simulations. In this paper we perform molecular dynamics simulations of ½〈1 1 1〉 screw dislocation motion in tungsten using step and linear time functions for applying the external stress. We find that linear functions over time scales of the order of 10–20 ps reduce fluctuations and speed up convergence to the steady-state velocity by up to a factor of two.
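For concreteness, the two stress loading schedules compared in the abstract, a step and a linear ramp over a finite rise time, can be written as simple time functions; the target stress and ramp time below are placeholder values, not the parameters used in the study.

# Sketch of the two external-stress time functions mentioned in the abstract:
# a step applied at t = 0 and a linear ramp over a finite rise time.
# Target stress and ramp time are placeholders, not the study's parameters.

def stress_step(t_ps: float, sigma_target: float) -> float:
    """Full target stress applied instantaneously at t = 0."""
    return sigma_target if t_ps >= 0.0 else 0.0

def stress_linear(t_ps: float, sigma_target: float, t_ramp_ps: float = 15.0) -> float:
    """Stress ramped linearly over t_ramp_ps (order of 10-20 ps), then held."""
    if t_ps <= 0.0:
        return 0.0
    return sigma_target * min(t_ps / t_ramp_ps, 1.0)

if __name__ == "__main__":
    for t in (0.0, 5.0, 15.0, 30.0):
        print(t, stress_step(t, 1.0), round(stress_linear(t, 1.0), 3))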
Abstract:
This research focuses on the audiovisual dimension of architecture as an intersensorial approach to the apprehension and design of space. Highlighting the complexity of the relationship between people and their environment, it argues for the development of new methodologies and tools that take this complexity into account and support the design process. The research is driven by the conviction that the rapid and profound changes that characterise our times in every sphere (social, economic, political) inevitably entail new ways of knowing and experiencing space, and therefore new lines of research. The growing appreciation, in all fields of knowledge, of subjective and sensorial aspects; the development of technologies that have completely changed our relationships with one another and with our surroundings; the new capabilities for analysing, recording, storing and manipulating data; and, last but not least, the democratic and global availability of knowledge through the Internet, all impose a different approach to making, conceiving and experiencing architecture. The work presents a critical analysis of the state of the art, building new networks of relations between disciplines that allow the audiovisual dimension to be posed as a new line of research within architecture and highlighting the need to carry out such analyses in a transversal and interdisciplinary way. Particular attention is paid to the evolution of sound and its qualitative approach to architecture, showing how sound, with its capacity to introduce time and dynamic aspects (movement, the presence of the body), is not simply another sensory channel for the apprehension of space: its interaction with the visual generates an inseparable space-time, specific to each moment and place. On this basis, a methodological revision has been carried out in which the walk is used as a tool of analysis for studying the relationship between space, action and audiovisual perception, crossing data on the morphology of the space with data on individual perceptual experience and on the collective uses of the space. Video is used not only to represent reality but also as an instrument of analysis that makes it possible to collect data (audio recordings, video, observations), isolate, study, classify and order them, and finally restitute them through editing. A first in situ experiment served to explore the application of the method, raising new questions and opening lines of analysis for further research.
Abstract:
The behavior of the interface between the FRP and the concrete is the key factor controlling debonding failures in FRP-strengthened RC structures. This defect can reduce static strength and structural integrity and change the dynamic behavior of the structure. The adverse effect of such defects on the dynamic behavior can be exploited as an effective means of identifying and assessing both the location and size of debonding at its earliest stages. The presence of debonding changes the structural dynamic characteristics and may be traced in modal parameters, dynamic strains, wave patterns, etc. Detection of minor local defects, such as those at the origin of a future debonding, requires working at high frequencies so that the wavelength of the excited waves is small and sensitive enough to detect local damage. The development of a spectral element method offers great potential for high-frequency structural modeling. In contrast to conventional finite elements, inertial properties are modeled exactly, so few elements are needed to capture very accurate solutions at the highest frequencies over large regions. A wide variety of spectral elements have been developed for structural members over finite and semi-infinite regions. The objective of this paper is to develop a spectral finite element model that efficiently captures the behavior of intermediate debonding of an FRP-strengthened RC beam during wave-based diagnostics.
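As a hedged illustration of why spectral elements need so few elements at high frequency, the sketch below evaluates the exact frequency-domain stiffness of a two-node axial rod element, whose inertia is represented exactly at every frequency. The rod element and material values are assumptions for exposition; this is not the FRP-concrete beam model developed in the paper.

# Hedged sketch: exact frequency-domain (spectral) stiffness of a two-node
# axial rod element, D(w) = (E*A*k/sin(kL)) * [[cos(kL), -1], [-1, cos(kL)]],
# with k = w/c and c = sqrt(E/rho).  Because the inertia is represented
# exactly, one element can span an arbitrarily long uniform region at any
# frequency.  Material values are placeholders chosen for illustration.

import numpy as np

def rod_spectral_stiffness(omega: float, E: float, rho: float,
                           A: float, L: float) -> np.ndarray:
    c = np.sqrt(E / rho)          # axial wave speed
    k = omega / c                 # wavenumber at this frequency
    kL = k * L
    if np.isclose(kL, 0.0):       # static limit: E*A/L * [[1,-1],[-1,1]]
        return (E * A / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    factor = E * A * k / np.sin(kL)
    return factor * np.array([[np.cos(kL), -1.0], [-1.0, np.cos(kL)]])

if __name__ == "__main__":
    # Placeholder concrete-like properties, 1 m element, 20 kHz
    D = rod_spectral_stiffness(2 * np.pi * 20e3, E=30e9, rho=2400.0,
                               A=0.01, L=1.0)
    print(D)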
Abstract:
Logic programming (LP) is a family of high-level programming languages with high expressive power: the programmer states the properties of the result and/or executable specifications instead of detailed computation steps. Logic programming systems featuring tabled execution and constraint logic programming have been shown to increase the declarativeness and efficiency of Prolog, while at the same time making it possible to write very expressive programs. Tabled execution avoids infinite failure in some cases and improves efficiency in programs that repeat computations. CLP reduces the search tree and brings the power of solving (in)equations over arbitrary domains. As in the LP case, CLP systems can also benefit from the power of tabling. Previous implementations which take full advantage of the ideas behind tabling (e.g., forcing suspension, answer subsumption, etc., wherever necessary to avoid recomputation and to terminate whenever possible) did not offer a simple, well-documented, easy-to-understand interface, which would be necessary to make the integration of arbitrary CLP solvers into existing tabling systems possible. This clearly hinders a more widespread use of the combination of both facilities. In this thesis we examine the requirements that a constraint solver must fulfil in order to be interfaced with a tabling system. We propose and implement a framework, which we have called Mod TCLP, with a minimal set of operations (e.g., entailment checking and projection) which the constraint solver has to provide to the tabling engine. We validate the design of Mod TCLP through a series of use cases: we re-engineer a previously existing tabled constraint domain (difference constraints) which was connected in an ad hoc manner with the tabling engine in Ciao Prolog; we integrate Holzbauer's CLP(Q) implementation with Ciao Prolog's tabling engine; and we implement a constraint solver over (finite) lattices. We evaluate its performance with several benchmarks that implement a simple abstract interpreter whose fixpoint is reached by means of tabled execution, and whose domain operations are handled by the constraint solver over (finite) lattices, where TCLP avoids recomputing subsumed abstractions.
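The minimal set of operations mentioned above (entailment checking and projection, plus combining answer constraints on resumption) can be pictured as an abstract interface. The Python rendering below is purely hypothetical, since Mod TCLP is a Prolog framework; the method names are illustrative assumptions, not the framework's actual API.

# Hypothetical rendering, in Python, of the kind of minimal interface a
# constraint solver must expose to a tabling engine (Mod TCLP itself is a
# Prolog framework; these method names are assumptions for illustration).

from abc import ABC, abstractmethod
from typing import Any, Iterable

class TabledConstraintSolver(ABC):
    @abstractmethod
    def project(self, store: Any, variables: Iterable[Any]) -> Any:
        """Project the constraint store onto the call/answer variables."""

    @abstractmethod
    def entails(self, general: Any, particular: Any) -> bool:
        """True if `general` entails `particular`: a new call subsumed by a
        stored one can suspend and reuse its answers instead of recomputing."""

    @abstractmethod
    def merge(self, store_a: Any, store_b: Any) -> Any:
        """Conjoin an answer constraint with the current store on resumption."""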
Linear global instability of non-orthogonal incompressible swept attachment-line boundary layer flow
Abstract:
Instability of the orthogonal swept attachment-line boundary layer has received attention by local1, 2 and global3–5 analysis methods over several decades, owing to the significance of this model to transition to turbulence on the surface of swept wings. However, substantially less attention has been paid to the problem of laminar flow instability in the non-orthogonal swept attachment-line boundary layer; only a local analysis framework has been employed to date.6 The present contribution addresses this issue from a linear global (BiGlobal) instability analysis point of view in the incompressible regime. Direct numerical simulations have also been performed in order to verify the analysis results and unravel the limits of validity of the Dorrepaal basic flow7 model analyzed. Cross-validated results document the effect of the angle of attack on the critical conditions identified by Hall et al.1 and show linear destabilization of the flow with decreasing AoA, up to a limit at which the assumptions of the Dorrepaal model become questionable. Finally, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis et al.4 is presented for the non-orthogonal flow. In this model, the symmetries of the three-dimensional disturbances are broken by the non-orthogonal flow conditions. Temporal and spatial one-dimensional linear eigenvalue codes were developed, obtaining results consistent with BiGlobal stability analysis and DNS. Beyond the computational advantages of the ODE-based model, it allows us to understand the functional dependence of the three-dimensional disturbances in the non-orthogonal case as well as their connections with the disturbances of the orthogonal stability problem.
Abstract:
The development of a global instability analysis code coupling a time-stepping approach, as applied to the solution of BiGlobal and TriGlobal instability analyses,1, 2 with finite-volume-based spatial discretization, as used in standard aerodynamics codes, is presented. The key advantage of the time-stepping method over matrix-formation approaches is that the former avoids the computer-storage issues associated with the latter methodology. To date both approaches have been used successfully to analyze instability in complex geometries, although their relative advantages have never been quantified. The ultimate goal of the present work is to address this issue in the context of spatial discretization schemes typically used in industry. The time-stepping approach of Chiba3 has been implemented in conjunction with two direct numerical simulation algorithms, one based on the high-order methods typically used in this context and another based on low-order methods representative of those in common use in industry. The two codes have been validated against solutions of the BiGlobal EVP, and it has been shown that small errors in the base flow do not affect the results significantly. As a result, a three-dimensional compressible unsteady second-order code for global linear stability has been successfully developed, based on finite-volume spatial discretization and a time-stepping method, with the ability to study complex geometries by means of unstructured and hybrid meshes.
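The essence of the time-stepping approach, recovering the leading eigenvalues of the linearized operator from the action of a short-time propagator computed by a flow solver, without ever forming the matrix, can be sketched on a toy linear system with SciPy's matrix-free Arnoldi. The small random operator and the explicit Euler integrator below are stand-ins chosen for illustration, not the compressible finite-volume code described in the abstract.

# Schematic of the time-stepping approach to global stability: instead of
# forming the Jacobian A, only its action through a short-time propagator
# u(T) = exp(A T) u(0) is used, and Arnoldi iteration on that matrix-free
# operator returns the leading (least stable) eigenvalues.

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

rng = np.random.default_rng(0)
n, T, n_steps = 100, 0.5, 200
A = -np.eye(n) + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)  # toy Jacobian

def propagate(u0):
    """Time-step du/dt = A u over horizon T (explicit Euler as a stand-in DNS)."""
    dt, u = T / n_steps, u0.copy()
    for _ in range(n_steps):
        u = u + dt * (A @ u)
    return u

P = LinearOperator((n, n), matvec=propagate)
mu, _ = eigs(P, k=5, which="LM")          # eigenvalues of the propagator
lam = np.log(mu) / T                      # growth rates / frequencies of A
print("leading eigenvalues of A (time-stepping estimate):",
      np.sort_complex(lam)[-3:][::-1])
print("leading eigenvalues of A (direct, for comparison): ",
      np.sort_complex(np.linalg.eigvals(A))[-3:][::-1])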
Abstract:
When an automobile passes over a bridge, dynamic effects are produced in both the vehicle and the structure. In addition, the bridge itself moves when exposed to wind, inducing dynamic effects on the vehicle that have to be considered. The main objective of this work is to understand the influence of the different parameters concerning the vehicle, the bridge, the road roughness and the wind on the comfort and safety of vehicles crossing bridges. Nonlinear finite element models are used for the structures and multibody dynamic models for the vehicles. The interaction between the vehicle and the bridge is handled by contact methods. Road roughness is described by the power spectral density (PSD) proposed by ISO 8608. To account for the fact that the profiles under the right and left wheels are different but not independent, the hypotheses of homogeneity and isotropy are assumed. To generate the wind velocity history along the road the Sandia method is employed. The global problem is solved by means of the finite element method. First the methodology for modelling the interaction is verified against a benchmark. Next, the case of a vehicle running along a rigid road and subjected to turbulent wind is analyzed, and the road roughness is incorporated in a subsequent step. Finally, the flexibility of the bridge is added to the model by making the vehicle run over the structure. The application of this methodology will make it possible to understand the influence of the different parameters on the comfort and safety of road vehicles crossing wind-exposed bridges. These results will help in recommending measures to make traffic over bridges more reliable without affecting the structural integrity of the viaduct.
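As an illustration of the ISO 8608 roughness description mentioned above, the sketch below generates a single-track road profile by superposing cosines with random phases drawn from the displacement PSD; the roughness coefficient is an assumed class-A-like value, and the left/right-track correlation treated in the paper is not modelled here.

# Hedged sketch: generate one road roughness profile from the ISO 8608
# displacement PSD  Gd(n) = Gd(n0) * (n / n0)**(-2),  n0 = 0.1 cycles/m,
# by superposing cosines with random phases.  Gd(n0) is an assumed
# class-A-like roughness coefficient.

import numpy as np

def iso8608_profile(length_m=500.0, dx=0.25, Gd_n0=16e-6, n0=0.1,
                    n_min=0.01, n_max=2.0, n_waves=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, length_m, dx)
    n = np.linspace(n_min, n_max, n_waves)          # spatial frequencies [cycles/m]
    dn = n[1] - n[0]
    Gd = Gd_n0 * (n / n0) ** (-2.0)                 # displacement PSD [m^3]
    amp = np.sqrt(2.0 * Gd * dn)                    # cosine amplitudes
    phase = rng.uniform(0.0, 2.0 * np.pi, n_waves)
    z = (amp[None, :] * np.cos(2.0 * np.pi * x[:, None] * n[None, :]
                               + phase[None, :])).sum(axis=1)
    return x, z

if __name__ == "__main__":
    x, z = iso8608_profile()
    print("RMS roughness [m]:", z.std())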
Abstract:
Global linear instability theory is concerned with the temporal or spatial development of small-amplitude perturbations superposed upon laminar steady or time-periodic three-dimensional flows, which are inhomogeneous in two (and periodic in one) or all three spatial directions.1 The theory addresses flows developing in complex geometries, in which the parallel or weakly non-parallel basic flow approximation invoked by classic linear stability theory does not hold. As such, global linear theory is called upon to fill the gap in research into stability and transition in flows over or through complex geometries. Historically, global linear instability has been (and still is) concerned with the solution of multi-dimensional eigenvalue problems; the maturing of non-modal linear instability ideas in simple parallel flows during the last decade of the last century2–4 has given rise to the investigation of transient growth scenarios in an ever-increasing variety of complex flows. After a brief exposition of the theory, connections are sought with established approaches for structure identification in flows, such as proper orthogonal decomposition and topology theory in the laminar regime, and open areas for future research, mainly concerning turbulent and three-dimensional flows, are highlighted. Recent results obtained in our group are reported for both the time-stepping and the matrix-forming approaches to global linear theory. In the first context, progress has been made in implementing a Jacobian-free Newton-Krylov method into a standard finite-volume aerodynamic code, such that global linear instability results may now be obtained in compressible flows of aeronautical interest. In the second context, a new stable very-high-order finite difference method is implemented for the spatial discretization of the operators describing the spatial BiGlobal EVP, PSE-3D and the TriGlobal EVP; combined with sparse matrix treatment, all these problems may now be solved on standard desktop computers.
Abstract:
The great developments that have occurred during the last few years in the finite element method and its applications have overshadowed other options for computation. The boundary integral equation method now appears as a valid alternative and, in certain cases, has significant advantages. This method deals only with the boundary of the domain, while the F.E.M. analyses the whole domain. This has the following advantages: the dimension of the problem to be studied is reduced by one, consequently simplifying the system of equations and the preparation of input data; it is also possible to analyse infinite domains without discretization errors. These simplifications have the drawbacks of having to solve a full and non-symmetric matrix, and some difficulties arise in the imposition of boundary conditions when complicated variations of the function over the boundary are assumed. In this paper a practical treatment of these problems, in particular the imposition of boundary conditions, has been carried out using the computer program described below. Program SERBA solves general elastostatics problems in 2-dimensional continua using the boundary integral equation method. The boundary of the domain is discretized by line elements over which the functions are assumed to vary linearly. Data (stresses and/or displacements) are introduced in the local co-ordinate system (element co-ordinates). Resulting stresses are obtained in local co-ordinates and displacements in a general system. The program has been written in Fortran ASCII and implemented on a Univac 1108 computer. For 100 elements the core requirements are about 40 Kwords. Also available is a Fortran IV version (3 segments) implemented on a Hewlett-Packard 21MX computer, using 15 Kwords.
Abstract:
This paper describes a new and original method for designing oscillators based on the Normalized Determinant Function (NDF) and the Transpose Return Relations (RRT). First, a review of the loop-gain method is presented, including its pros and cons and some examples of wrong solutions provided by this method. In some cases the loop-gain method produces wrong solutions because certain necessary conditions are not fulfilled. The necessary conditions required to ensure a correct solution are described, and the need to use the NDF or the Transpose Return Relations (RRT), which are related to the true loop gain, to test these additional conditions is demonstrated. The paper concludes with the steps for oscillator design and analysis using the proposed NDF/RRT method. The wrong solutions of the loop-gain method are compared with the NDF/RRT results, and the accuracy of this method in estimating the oscillation frequency and QL is demonstrated. Additional examples of reference plane oscillators (Z/Y/T) are added and analyzed with the new NDF/RRT method, even though these oscillators cannot be analyzed using the classic loop-gain method.
Abstract:
Swift heavy ions (ions with mass above 15 amu and energies of the order of MeV/amu or higher) transfer their energy mainly to the electronic system, with small momentum transfer per collision. Therefore, they produce linear regions (columnar nano-tracks) around the straight ion trajectory with marked modifications with respect to the virgin material, e.g., phase transition, amorphization, compaction, and changes in physical or chemical properties. In the case of crystalline materials the most distinctive feature of swift heavy ion irradiation is the production of amorphous tracks embedded in the crystal. Lithium niobate is a relevant optical material that presents birefringence due to its anisotropic trigonal structure. The amorphous phase, in contrast, is isotropic, and its refractive index exhibits high contrast with those of the crystalline phase. This allows waveguides to be fabricated by swift ion irradiation, with important technological relevance. From the mechanical point of view, the inclusion of an amorphous nano-track (with a density 15% lower than that of the crystal) leads to the generation of important stress/strain fields around the track. These fields are ultimately the origin of crack formation, with fatal consequences for the integrity of the samples and the viability of the method for nano-track formation. For certain crystal cuts (X and Y), these fields are clearly anisotropic due to the crystal anisotropy. We have used finite element methods to calculate the stress/strain fields that appear around the ion-generated amorphous nano-tracks for a variety of ion energies and doses. A very remarkable feature for X-cut samples is that the maximum shear stress appears on preferential planes that form +/-45º with respect to the crystallographic planes. This leads to the generation of oriented surface cracks as the dose increases. The growth of the cracks along the anisotropic crystal has been studied by means of novel extended finite element methods, which include cracks as discontinuities. In this way we can study how the length and depth of a crack evolve as a function of the ion dose. In this work we show how the simulations compare with experiments and their application to materials modification by ion irradiation.
Abstract:
Assessing wind conditions on complex terrain has become a hard task as terrain complexity increases. Hence there is a need to extrapolate, in a reliable manner, the wind parameters that determine wind farm viability, such as the annual average wind speed at hub heights as well as turbulence intensities. Work on these tasks began in the early 1990s with the widely used linear models WAsP and WAsP Engineering, especially designed for simple terrain, where they give remarkable results but perform less well on complex orographies. At the same time, non-linearized Navier-Stokes solvers have developed rapidly over the last decade in the form of CFD (Computational Fluid Dynamics) codes, allowing atmospheric boundary layer flows over steep, complex terrain to be simulated more accurately and reducing uncertainties. This paper describes the features of these models and validates them against meteorological masts installed on highly complex terrain. The study compares the results of the mentioned models in terms of wind speed and turbulence intensity.