869 results for Error-correcting codes (Information theory)


Relevance:

40.00%

Publisher:

Abstract:

Most electronic systems can be described, in a very simplified way, as an assemblage of analog and digital components put together to perform a certain function. Nowadays there is an increasing tendency to reduce the analog components and to replace them with operations performed in the digital domain. This tendency has led to the emergence of new electronic systems that are more flexible, cheaper and more robust. However, no matter how much processing is implemented digitally, there will always be an analog part to deal with, and thus the step of converting digital signals into analog signals, and vice versa, cannot be avoided. This conversion can be more or less complex depending on the characteristics of the signals. Thus, even if it is desirable to replace functions carried out by analog components with digital processes, it is equally important to do so in a way that simplifies the conversion from digital to analog signals and vice versa. In the present thesis we have studied strategies based on increasing the amount of processing in the digital domain in such a way that the implementation of the analog hardware stages can be simplified. To this end, we have proposed the use of very coarsely quantized signals, i.e. 1-bit, for the acquisition and for the generation of particular classes of signals.
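One common way of trading extra digital processing for simpler analog hardware with 1-bit signals is sigma-delta modulation. The sketch below is only an illustration of that general idea under assumed parameters (oversampling ratio, test tone), not the specific architecture developed in the thesis.

```python
import numpy as np

def sigma_delta_1bit(x):
    """Encode samples x (roughly in [-1, 1]) as a +/-1 bit stream (first-order modulator)."""
    bits = np.empty_like(x)
    integrator, feedback = 0.0, 0.0
    for i, sample in enumerate(x):
        integrator += sample - feedback          # accumulate the quantisation error
        bits[i] = 1.0 if integrator >= 0 else -1.0
        feedback = bits[i]
    return bits

def decode_1bit(bits, window=64):
    """Recover an approximation of the input by low-pass filtering (moving average)."""
    kernel = np.ones(window) / window
    return np.convolve(bits, kernel, mode="same")

osr = 64                                         # oversampling ratio (assumed)
t = np.arange(0, 1, 1 / (1000 * osr))            # 64000 samples over a 1 s window
x = 0.5 * np.sin(2 * np.pi * 5 * t)              # slow test tone, well inside [-1, 1]
recovered = decode_1bit(sigma_delta_1bit(x))
print("max reconstruction error:", float(np.max(np.abs(recovered - x))))
```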

Relevance:

40.00%

Publisher:

Abstract:

I present a new experimental method called Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS). It is a method that can probe hydrodynamic flows near solid surfaces on length scales of tens of nanometres. Fluorescent tracers flowing with the liquid are excited by evanescent light, produced by epi-illumination through the periphery of a high-NA oil-immersion objective. Due to the fast decay of the evanescent wave, fluorescence only occurs for tracers within ~100 nm of the surface, thus resulting in very high normal resolution. The time-resolved fluorescence intensity signals from two laterally shifted (in the flow direction) observation volumes, created by two confocal pinholes, are independently measured and recorded. The cross-correlation of these signals provides important information about the tracers' motion and thus their flow velocity. Due to the high sensitivity of the method, fluorescent species of different sizes, down to single dye molecules, can be used as tracers. The aim of my work was to build an experimental setup for TIR-FCCS and use it to measure the shear rate and slip length of water flowing on hydrophilic and hydrophobic surfaces. However, in order to extract these parameters from the measured correlation curves, a quantitative data analysis is needed. This is not a straightforward task, because the complexity of the problem makes it impossible to derive analytical expressions for the correlation functions needed to fit the experimental data. Therefore, in order to process and interpret the experimental results, I also describe a new numerical method for analysing the acquired auto- and cross-correlation curves: Brownian Dynamics techniques are used to produce simulated auto- and cross-correlation functions and to fit the corresponding experimental data. I show how to combine detailed and fairly realistic theoretical modelling of the phenomena with accurate measurements of the correlation functions, in order to establish a fully quantitative method for retrieving the flow properties from the experiments. An importance-sampling Monte Carlo procedure is employed to fit the experiments, providing the optimum parameter values together with their statistical error bars. The approach is well suited for both modern desktop PCs and massively parallel computers; the latter allow the data analysis to be completed within short computing times. I applied this method to study the flow of an aqueous electrolyte solution near smooth hydrophilic and hydrophobic surfaces. Generally, no slip is expected on a hydrophilic surface, while some slippage may exist on a hydrophobic surface. Our results show that on both hydrophilic and moderately hydrophobic (contact angle ~85°) surfaces the slip length is ~10-15 nm or lower and, within the limitations of the experiments and the model, indistinguishable from zero.
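As an aside, the core idea of extracting a transit velocity from two laterally shifted detection volumes can be illustrated with a simple peak-lag estimate on synthetic data. This is only a sketch with assumed numbers (sampling interval, volume separation, noise); the thesis itself fits Brownian Dynamics simulations to the full correlation curves rather than using a peak search.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-5            # sampling interval [s] (assumed)
separation = 2e-6    # lateral distance between the two confocal volumes [m] (assumed)
true_delay = 4e-4    # transit time of a tracer between the volumes [s]

n = 10_000
base = rng.poisson(5.0, n).astype(float)                  # shared fluorescence bursts
shift = int(round(true_delay / dt))                       # 40 samples
sig_a = base + rng.normal(0.0, 1.0, n)                    # upstream observation volume
sig_b = np.roll(base, shift) + rng.normal(0.0, 1.0, n)    # downstream (delayed) volume

a = sig_a - sig_a.mean()
b = sig_b - sig_b.mean()
xcorr = np.correlate(b, a, mode="full")                   # cross-correlation of the two traces
lag = int(np.argmax(xcorr)) - (n - 1)                     # lag of the peak, in samples
velocity = separation / (lag * dt)
print(f"estimated transit velocity: {velocity:.2e} m/s (true {separation / true_delay:.2e})")
```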

Relevance:

40.00%

Publisher:

Abstract:

Polar codes are the first class of error-correcting codes proven to achieve the capacity of any symmetric, discrete, memoryless channel, thanks to a recently introduced method called "Channel Polarization". In this thesis the main encoding and decoding algorithms are described in detail. In particular, the performance of the simulators developed for the Successive Cancellation Decoder and for the Successive Cancellation List Decoder is compared with the results reported in the literature. In order to improve the minimum distance, and consequently the performance, we use a concatenated scheme with the polar code as the inner code and a CRC as the outer code. We also propose a new technique for analysing channel polarization in the case of transmission over an AWGN channel, which is the most appropriate statistical model for satellite communications and deep-space applications. In addition, we investigate the importance of an accurate approximation of the polarization functions.
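For reference, the basic encoding operation behind channel polarization is simply the Kronecker power of a 2x2 kernel. The toy sketch below (assumed block length, frozen-bit set and message, not the simulators developed in the thesis) shows how a codeword is formed.

```python
import numpy as np

def polar_transform(n_stages):
    """n-fold Kronecker power of the polarization kernel F = [[1, 0], [1, 1]]."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n_stages):
        G = np.kron(G, F)
    return G

N = 8                                   # block length 2^3 (toy size)
frozen = {0, 1, 2, 4}                   # illustrative frozen-bit positions
info_bits = [1, 0, 1, 1]                # K = 4 message bits

u = np.zeros(N, dtype=np.uint8)
u[[i for i in range(N) if i not in frozen]] = info_bits
codeword = (u @ polar_transform(3)) % 2  # x = u G over GF(2)
print("codeword:", codeword)
```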

Relevance:

40.00%

Publisher:

Abstract:

According to Bandura (1997), efficacy beliefs are a primary determinant of motivation. Still, very little is known about the processes through which people integrate situational factors to form efficacy beliefs (Myers & Feltz, 2007). The aim of this study was to gain insight into the cognitive construction of subjective group-efficacy beliefs. Only with a sound understanding of those processes is there a sufficient basis for deriving psychological interventions aimed at group-efficacy beliefs. According to cognitive theories (e.g., Miller, Galanter, & Pribram, 1973), individual group-efficacy beliefs can be seen as the result of a comparison between the demands of a group task and the resources of the performing group. At the center of this comparison are internally represented structures of the group task and plans to perform it. The empirical plausibility of this notion was tested using functional measurement theory (Anderson, 1981). Twenty-three students (M = 23.30 years; SD = 3.39; 35% female) of the University of Bern repeatedly judged the efficacy of groups in different group tasks. Each group consisted of the subject and one or two further fictive group members, who were manipulated with respect to their level (low, medium, high) of task-relevant abilities. Data obtained from multiple full factorial designs were structured with individuals as second-level units and analyzed using mixed linear models. The task-relevant abilities of the group members, specified as fixed factors, all had highly significant effects on subjects' group-efficacy judgments. The effect sizes of the ability factors depended on the respective abilities' importance in a given task. In additive tasks (Steiner, 1972) group resources were integrated in a linear fashion, whereas significant interactions between factors were obtained in interdependent tasks. The results also showed that people take other group members' efficacy beliefs into account when forming their own group-efficacy beliefs. The results support the notion that personal group-efficacy beliefs are obtained by comparing the demands of a task with the performing group's resources. Psychological factors such as other team members' efficacy beliefs are thereby considered task-relevant resources and affect subjective group-efficacy beliefs. This latter finding underlines the adequacy of multidimensional measures. While the validity of collective-efficacy measures is usually estimated by how well they predict performance, the results of this study allow for a somewhat more internal validity criterion. It is concluded that Information Integration Theory holds potential to further the understanding of people's cognitive functioning in sport-relevant situations.
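The analysis described (factorial ability factors as fixed effects, individuals as second-level units) corresponds to the kind of mixed linear model sketched below. The data, effect sizes, and model formula here are purely illustrative assumptions, not the study's material.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
levels = [1, 2, 3]                          # low / medium / high task-relevant ability
rows = []
for subject in range(23):                   # 23 subjects, as in the study, but fake data
    subject_effect = rng.normal(0.0, 0.5)   # between-subject variation (random intercept)
    for a in levels:
        for b in levels:
            judgment = 2.0 + 0.8 * a + 0.6 * b + subject_effect + rng.normal(0.0, 0.3)
            rows.append({"subject": subject, "ability_a": a,
                         "ability_b": b, "judgment": judgment})
data = pd.DataFrame(rows)

# Fixed effects (and their interaction) for the fictive members' abilities,
# random intercept per subject (individuals as second-level units).
model = smf.mixedlm("judgment ~ ability_a * ability_b", data, groups=data["subject"])
print(model.fit().summary())
```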

Relevance:

40.00%

Publisher:

Abstract:

The authors are from UPM and form a fairly close group; all have intervened in different academic or real cases on the subject, at different times and being of different ages. Building on the precedent of E. Torroja and A. Páez in Madrid, Spain, whose probabilistic safety models for concrete date from about 1957 and are now discussed in ICOSSAR conferences, author J.M. Antón has been involved since autumn 1967 in European steel construction within CECM. He produced a mathematical model for reductions in the superposition of independent loads and, using it, a pattern of load coefficients for codes, presented in Rome in February 1969 and practically adopted for European constructions; at JCSS in Lisbon, February 1974, he suggested its unification for concrete, steel and aluminium. That model describes loads with Gumbel Type I distributions over 50 years for one type of load, reduced to 1 year in order to be added to other independent loads, with the sum brought back, through Gumbel theory, to a 50-year return period; parallel models exist. A complete reliability system was produced, including nonlinear effects such as buckling, phenomena considered to some extent in the current structural Eurocodes derived from the Model Codes. The system was considered by the author within CEB in the presence of hydraulic effects from rivers, floods and the sea, with reference to actual practice. When drafting a road drainage norm for MOPU, Spain, the authors developed an optimization model that provides a way of determining the return period, 10 to 50 years, for the hydraulic flows to be considered in road drainage. Satisfactory examples were a stream in south-east Spain modelled with a Gumbel Type I distribution and a paper by Ven Te Chow on the Mississippi at Keokuk using a Gumbel Type II distribution; the model can be modernized with a wider variety of extreme-value laws. In fact, in the MOPU drainage norm the drafting commission also acted as an expert panel to set a table of return periods for the elements of road drainage, in effect a complex multi-criteria decision system. These ideas were used, for example, in widely applied codes and presented at symposia or meetings, but not published in English-language journals; a condensed account of the authors' contributions is presented here. The authors are also involved in optimization for hydraulic and agricultural planning, and give modest hints of intended applications to agricultural and environmental planning, namely the selection of the criteria and utility functions involved in Bayesian, multi-criteria or mixed decision systems. Modest consideration is given to climate change, to production and commercial systems, and to other factors such as social and financial ones.
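The Gumbel Type I return-period calculation underlying these load and drainage models can be stated compactly. The sketch below uses hypothetical location and scale parameters, not figures from any of the works cited.

```python
import numpy as np

def gumbel_quantile(mu, beta, T):
    """Design value with return period T (years) for a Gumbel Type I maximum: F(x_T) = 1 - 1/T."""
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

mu, beta = 120.0, 35.0        # hypothetical location/scale of an annual-maximum flow [m^3/s]
for T in (10, 25, 50):
    print(f"T = {T:2d} years  ->  design flow ~ {gumbel_quantile(mu, beta, T):.1f} m^3/s")
```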

Relevance:

40.00%

Publisher:

Abstract:

In order to establish an active internal know-how reserve in an information processing and engineering services company, a training architecture tailored to the company as a whole must be defined. When a company's earnings come from advisory services dynamically structured in the form of projects, as is the case at hand, difficulties arise that must be taken into account in the architectural design. The first difficulties are of a psychological nature, and the design method proposed here begins with the definition of the highest training metasystem, which is aimed at making adjustments for the variety of perceptions of the company's human components, before the architecture can be designed. This approach may be considered an application of the cybernetic Law of Requisite Variety (Ashby) and of the Principle of Conceptual Integrity (Brooks). Also included is a description of some of the results of the first steps of the metasystems at the level of company organization.

Relevance:

40.00%

Publisher:

Abstract:

Air traffic management (ATM) is undergoing a paradigm shift towards trajectory-based operations, in which the role of the air traffic controller evolves from continuous tactical intervention towards longer-term supervision, as decision making is improved through increased confidence in the solutions provided by advanced automation. To support this concept, significant investment is required for the development and acquisition of new equipment, on the ground as well as in the air, to facilitate the high degree of trajectory synchronisation and information exchange required. Over the past 30 to 40 years, however, the airline industry has generated one of the lowest returns on invested capital among all industries. Without tangible benefits, the industry finds it difficult to attract the investment capital required for modernisation, which delays the acquisition of the equipment needed to realise trajectory-based operations. In response to these challenges, this thesis aims to answer the question of whether existing aircraft capabilities can be applied to achieve sufficient trajectory synchronisation and, together with improvements to ground-based trajectory prediction in support of the arrival management process, to realise some of the benefits envisioned under trajectory-based operations. This could provide an incentive for further avionics upgrades, which in turn could lead to additional improvements.
The proposed operational concept aims to permit aircraft to be flown in a manner consistent with current optimal operating techniques. It allows aircraft to descend in the fuel-efficient path-managed mode preferred by a majority of airlines, with the arrival time not actively controlled by the airborne automation. Temporal uncertainty is managed through metering at strategically chosen points along the aircraft's trajectory, with primary use of speed advisories issued by ground control. While the focus is on speed advisories, so as to support all aircraft at currently typical levels of equipage, the concept also constitutes a framework into which advanced avionics, such as airborne (FMS) time-of-arrival control, can be integrated naturally once that technology is widely available. In addition to managing temporal uncertainty through metering at multiple points, this uncertainty is minimised by improving the supporting ground-based trajectory prediction capability. A novel two-stage trajectory prediction process is presented that integrates the reference trajectory computed by the aircraft's Flight Management System (FMS), available through the Future Air Navigation System (FANS), into the ground-based trajectory predictor. FANS is standard equipment on any wide-body aircraft in production today, and some single-aisle aircraft can readily be fitted with it. In addition to automatic position reporting, FANS can provide (part of) the reference trajectory held by the FMS, but the exploitation of this capability for improving trajectory prediction has so far been largely overlooked. The two-stage process provides a "best of both worlds" solution to the air-ground trajectory synchronisation problem: it synchronises the dimensions controlled by the guidance mode with the FMS reference trajectory provided through FANS, and it improves the prediction of the remaining open dimensions by using a guidance model that exploits the high-resolution meteorological forecasts available to a ground-based system.
The two-stage trajectory prediction process was applied to a sample of 438 real FANS-equipped Boeing 737-800 flights into Melbourne conducting continuous descents free from ATC intervention; the methodology can be extrapolated to other aircraft types. Trajectories predicted with the two-stage approach provided estimated times of arrival at the point of interest with a 30% reduction in the standard deviation of the error compared with the estimates calculated by the FMS. The improved predicted trajectory can subsequently be used to set the arrival sequence and to allocate landing slots. Based on the allocated landing slot, the proposed system calculates a speed schedule that allows the aircraft to meet the slot with minimal impact on flight efficiency. A novel algorithm is presented that determines this speed schedule without requiring an iterative search over the trajectory predictor. The algorithm is based on a parameterisation of the trajectory prediction process that allows the estimated time of arrival to be represented as a polynomial function of the speed schedule; solving that polynomial for the desired arrival time yields, analytically, the speed profile required to meet the slot without compromising efficiency. The arrival management solution proposed in this thesis leverages existing avionics and communication systems far more efficiently, providing new value to industry from current investment. The solution therefore supports a transition from mixed equipage towards the advanced avionics currently under development, and the benefits realised during this transition provide an incentive for ongoing investment in avionics and ground-based air traffic control systems.
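The closing idea, representing the estimated time of arrival as a polynomial in the speed schedule and solving it analytically for a target slot, can be illustrated as follows. The predictor stand-in, speed range, and slot time are assumptions made for the sketch; the thesis' actual parameterisation of the trajectory prediction process is more elaborate.

```python
import numpy as np

def predicted_eta(speed_kt):
    """Stand-in for a ground-based trajectory predictor (purely illustrative)."""
    distance_nm = 120.0
    return 3600.0 * distance_nm / speed_kt        # seconds to fly 120 NM at this speed

speeds = np.linspace(230.0, 310.0, 5)             # candidate speed schedules [kt]
etas = np.array([predicted_eta(v) for v in speeds])
coeffs = np.polyfit(speeds, etas, deg=2)          # ETA approximated as a polynomial in speed

target_eta = 1600.0                               # allocated landing slot [s from now] (assumed)
roots = np.roots(coeffs - np.array([0.0, 0.0, target_eta]))
feasible = [r.real for r in roots
            if abs(r.imag) < 1e-9 and speeds.min() <= r.real <= speeds.max()]
print("speed meeting the slot analytically:", feasible)   # ~270 kt for this toy predictor
```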

Relevance:

40.00%

Publisher:

Abstract:

We present here an information reconciliation method and demonstrate for the first time that it can achieve efficiencies close to 0.98. This method is based on the belief propagation decoding of non-binary LDPC codes over finite (Galois) fields. In particular, for convenience and faster decoding we only consider power-of-two Galois fields.
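The decoding described relies on arithmetic in GF(2^q). A minimal, self-contained sketch of such field arithmetic (here q = 4 with the irreducible polynomial x^4 + x + 1, chosen only for illustration) is given below; it is not the belief-propagation decoder of the paper.

```python
Q = 4
POLY = 0b10011            # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Multiply two GF(2^Q) elements given as integers (bit i = coefficient of x^i)."""
    result = 0
    while b:
        if b & 1:
            result ^= a               # addition in a characteristic-2 field is XOR
        b >>= 1
        a <<= 1
        if a & (1 << Q):              # reduce modulo the field polynomial
            a ^= POLY
    return result

# Sanity check: every nonzero element of GF(16) has a multiplicative inverse.
for x in range(1, 1 << Q):
    assert any(gf_mul(x, y) == 1 for y in range(1, 1 << Q))
print("GF(2^4) multiplication behaves as a field")
```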

Relevance:

40.00%

Publisher:

Abstract:

Multielectrode recording techniques were used to record ensemble activity from 10 to 16 simultaneously active CA1 and CA3 neurons in the rat hippocampus during performance of a spatial delayed-nonmatch-to-sample task. Extracted sources of variance were used to assess the nature of two different types of errors that accounted for 30% of total trials. The two types of errors included ensemble “miscodes” of sample phase information and errors associated with delay-dependent corruption or disappearance of sample information at the time of the nonmatch response. Statistical assessment of trial sequences and associated “strength” of hippocampal ensemble codes revealed that miscoded error trials always followed delay-dependent error trials in which encoding was “weak,” indicating that the two types of errors were “linked.” It was determined that the occurrence of weakly encoded, delay-dependent error trials initiated an ensemble encoding “strategy” that increased the chances of being correct on the next trial and avoided the occurrence of further delay-dependent errors. Unexpectedly, the strategy involved “strongly” encoding response position information from the prior (delay-dependent) error trial and carrying it forward to the sample phase of the next trial. This produced a miscode type error on trials in which the “carried over” information obliterated encoding of the sample phase response on the next trial. Application of this strategy, irrespective of outcome, was sufficient to reorient the animal to the proper between trial sequence of response contingencies (nonmatch-to-sample) and boost performance to 73% correct on subsequent trials. The capacity for ensemble analyses of strength of information encoding combined with statistical assessment of trial sequences therefore provided unique insight into the “dynamic” nature of the role hippocampus plays in delay type memory tasks.

Relevance:

40.00%

Publisher:

Abstract:

Purpose: To calculate theoretically the errors in the estimation of corneal power when using the keratometric index (nk) in eyes that underwent laser refractive surgery for the correction of myopia and to define and validate clinically an algorithm for minimizing such errors. Methods: Differences between corneal power estimation by using the classical nk and by using the Gaussian equation in eyes that underwent laser myopic refractive surgery were simulated and evaluated theoretically. Additionally, an adjusted keratometric index (nkadj) model dependent on r1c was developed for minimizing these differences. The model was validated clinically by retrospectively using the data from 32 myopic eyes [range, −1.00 to −6.00 diopters (D)] that had undergone laser in situ keratomileusis using a solid-state laser platform. The agreement between Gaussian (PGaussc) and adjusted keratometric (Pkadj) corneal powers in such eyes was evaluated. Results: It was found that overestimations of corneal power up to 3.5 D were possible for nk = 1.3375 according to our simulations. The nk value to avoid the keratometric error ranged between 1.2984 and 1.3297. The following nkadj models were obtained: nkadj = −0.0064286 r1c + 1.37688 (Gullstrand eye model) and nkadj = −0.0063804 r1c + 1.37806 (Le Grand). The mean difference between Pkadj and PGaussc was 0.00 D, with limits of agreement of −0.45 and +0.46 D. This difference correlated significantly with the posterior corneal radius (r = −0.94, P < 0.01). Conclusions: The use of a single nk for estimating the corneal power in eyes that underwent a laser myopic refractive surgery can lead to significant errors. These errors can be minimized by using a variable nk dependent on r1c.
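A rough numerical illustration of the reported adjustment, using the standard keratometric power formula P = (n − 1)/r1c and the Gullstrand-model regression for nkadj quoted in the abstract; the post-surgical radius value is an assumption, and this is not the paper's validation procedure.

```python
def keratometric_power(n_k, r1c_mm):
    """Corneal power in diopters: P = (n_k - 1) / r1c, with r1c converted to metres."""
    return (n_k - 1.0) / (r1c_mm / 1000.0)

r1c = 8.2                                    # post-LASIK anterior corneal radius [mm] (assumed)
nk_classic = 1.3375                          # classical keratometric index
nk_adjusted = -0.0064286 * r1c + 1.37688     # adjusted index (Gullstrand model, from the abstract)

print(f"classical nk: {keratometric_power(nk_classic, r1c):.2f} D")
print(f"adjusted  nk: {keratometric_power(nk_adjusted, r1c):.2f} D")
```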

Relevance:

40.00%

Publisher:

Abstract:

If one has a distribution of words (SLUNs or CLUNs) in a text written in the language L(MT), and one of the mathematical distribution expressions found in the literature is fitted to it, some parameter of the chosen expression can be taken as a measure of diversity. However, because the fit is not always good enough to serve as a standard measure, it is preferable to select an index that does not postulate a regularity of distribution expressible by a simple formula. The problem can be approached statistically, without particular regard to the organization of the text. Any monotonic function can serve as an index if it takes its minimum value when all elements belong to the same class (that is, all individuals correspond to the same symbol) and its maximum value when each element belongs to a different class (that is, each individual corresponds to a different symbol). It should also satisfy certain conditions, such as being relatively insensitive to the length of the text and invariant under a certain number of selection operations on the text; in theory, these operations can be random. The expressions that offer the most advantages are those derived from Shannon-Weaver information theory. Based on them, the authors develop a theoretical study of diversity indexes to be applied to texts written in the modeling language L(MT), although nothing prevents them from being applied to texts written in natural languages.
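The Shannon-Weaver index the authors advocate behaves exactly as required: it is minimal when every token is the same symbol and maximal when every token is different. A small sketch (with made-up token lists, not SLUN/CLUN data from the study) is shown below.

```python
import math
from collections import Counter

def shannon_diversity(tokens):
    """H = -sum(p_i * log2 p_i): 0 if all tokens share one symbol, log2(k) if all k tokens differ."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_diversity(["a"] * 10))                       # minimum: 0.0
print(shannon_diversity(list("abcdefghij")))               # maximum for 10 distinct tokens: log2(10)
print(shannon_diversity("the cat sat on the mat".split())) # intermediate value
```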

Relevance:

40.00%

Publisher:

Abstract:

Mode of access: Internet.