927 results for Time Complexity
Abstract:
Many data streaming applications produce massive amounts of data that must be processed in a distributed fashion due to the resource limitations of a single machine. We propose a distributed data stream clustering protocol. Theoretical analysis provides preliminary results on the quality of the discovered clustering. In addition, we present results on the protocol's ability to reduce time complexity with respect to the centralized approach.
Abstract:
Graph automorphism (GA) is a classical problem in which the objective is to compute the automorphism group of an input graph. In this work we propose four novel techniques to speed up algorithms that solve the GA problem by exploring a search tree. They improve the performance of the algorithm by reducing the depth of the search tree and by pruning it effectively. We formally prove that a GA algorithm that uses these techniques correctly computes the automorphism group of the input graph. We also describe how the techniques have been incorporated into the GA algorithm conauto, as conauto-2.03, with at most an additive polynomial increase in its asymptotic time complexity. We have experimentally evaluated the impact of each technique on several graph families, and observed that each one by itself significantly reduces the number of processed search-tree nodes on some subset of graphs, which justifies its use. When the techniques are applied together, their effects combine, reducing the number of processed nodes in most graphs. This is also reflected in a reduction of the running time, which is substantial in some graph families.
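The abstract does not include the algorithmic details of conauto-2.03, but the general search-tree approach it builds on can be illustrated. Below is a minimal, hypothetical Python sketch that computes the automorphism group of a small graph by backtracking over partial vertex mappings, pruning a branch as soon as a mapped pair violates adjacency; real tools such as conauto add far stronger invariants and pruning rules than this.

```python
def automorphisms(adj):
    """Backtracking search for all automorphisms of a graph.

    adj: adjacency matrix as a list of lists of 0/1.
    Returns the automorphism group as a list of permutations
    (perm[i] = image of vertex i). Pruning: a partial mapping is
    extended only while it preserves adjacency, so invalid branches
    of the search tree are cut as early as possible.
    """
    n = len(adj)
    degree = [sum(row) for row in adj]
    group = []

    def extend(perm, used):
        i = len(perm)
        if i == n:
            group.append(list(perm))
            return
        for v in range(n):
            if used[v] or degree[v] != degree[i]:
                continue  # degree is a cheap invariant: prune early
            # adjacency to all previously mapped vertices must match
            if all(adj[i][j] == adj[v][perm[j]] for j in range(i)):
                used[v] = True
                perm.append(v)
                extend(perm, used)
                perm.pop()
                used[v] = False

    extend([], [False] * n)
    return group

# 4-cycle: its automorphism group is the dihedral group D4 (8 elements)
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(len(automorphisms(C4)))  # -> 8
```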
Abstract:
In this work, some geometric problems related to facility location are studied. Facility location deals with the placement of one or more facilities (radars, warehouses, exploratory oil wells, etc.) so as to optimize certain objectives (serving the largest possible number of users, minimizing transportation cost, avoiding pollution of nearby towns, etc.). These kinds of real-world problems give rise to very interesting geometric problems. In the geometric formulation of many of them, the potential users of the service are represented as points, while facilities are represented by the geometric shape that best fits the service provided: an annulus in the case of radars, radio or TV antennas, agricultural sprinklers, etc., and a wedge in many illumination or surveillance applications. These two shapes are the geometric figures considered in this Thesis. The formal setting of the problem is the following: given an annulus or a wedge of fixed size and a set of n points in the plane, find the position of the figure that covers as many points as possible. These problems are solved using arrangements of curves in the plane. Arrangements are a well-known geometric structure within Computational Geometry. Here we deal with arrangements of unbounded Jordan curves that pairwise intersect in at most two points, since these are the arrangements that arise in solving the problems. Among the different techniques for computing arrangements, the incremental method is used because it generally leads to algorithms that are simpler to implement. As a result of this study, new upper bounds on the time complexity of constructing such arrangements with incremental algorithms have been obtained. The new bound O(nλ3(n)) improves the previously best known bound O(nλ4(n)). It is also shown that, under certain conditions, these arrangements can be constructed in O(nλ2(n)) time, which is the optimal bound for their construction. Restricting the study to specific curves, an O(n² min(log k, α(n))) algorithm is given that constructs the arrangement of n circles of k different radii; this result also holds for arrangements of ellipses, parabolas, or hyperbolas of k different sizes when all the figures are isothetic.
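For context, λs(n) above denotes the maximum length of a Davenport–Schinzel sequence of order s on n symbols; the following standard bounds, which the abstract's complexities inherit, show what the improvement from λ4 to λ3 buys:

```latex
\lambda_2(n) = 2n - 1, \qquad
\lambda_3(n) = \Theta\!\left(n\,\alpha(n)\right), \qquad
\lambda_4(n) = \Theta\!\left(n \cdot 2^{\alpha(n)}\right)
```

Here α(n) is the extremely slowly growing inverse Ackermann function, so the construction time drops from O(n² · 2^α(n)) to O(n² α(n)), and to O(n²) in the cases where the O(nλ2(n)) bound applies.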
Abstract:
Transport capacity is one of the fundamental yardsticks for evaluating the progress that an economic and social area can reach, and it is a sector of great importance for today's society. Among the different modes of transport, rail is one of the fastest growing: for both passengers and freight, the train has become a very useful means of transport, within cities, between nearby cities and, increasingly, thanks to high speed, between distant ones. This Thesis aims to assist in the design of one of the most important stages of a railway installation project: the electric traction power supply system. The design phase of a railway traction power system faces many questions that must be resolved precisely, since the ability to meet the energy demands of railway operation depends on its success. Installation and operating costs, both direct and indirect, must also be considered. The Methodology presented in this Thesis offers the designer an expert system whose solutions are a set of correct power supply scenarios, verified by solving equation models: correct with respect to the validity of the various electrical parameters as well as to budgeted costs and the impact of indirect costs. Using this Methodology, the designer obtains, in a relatively short time, a set of feasible solutions from which to choose the one that best suits the final requirements. This Thesis was developed within a research line of the Railway Research Centre CITEF-UPM. Among other projects and research lines, CITEF has been working on validation and dimensioning studies of railway electrical systems for a wide variety of clients and railway systems. Throughout these projects, interest has mostly centred on the following parameters of the electrical system:
- Number and position of traction substations, and the power of each substation.
- Type of catenary along the route, the conductors composing it, and their characteristics.
- Number and position of autotransformers for systems operating on dual-voltage AC, or 2x25 kV.
- Position of neutral zones.
- Validation, against the applicable standards, of:
  o voltage drops along the line,
  o maximum return voltages on the line,
  o overheating of conductors,
  o overheating of the traction substation transformers.
The idea is that the solutions provided by the Methodology suggest scenarios in which these parameters lie within the limits set by the standards. Having a repository of candidate scenarios whose parameters and electrical elements have already been verified as correct saves time and testing, noticeably improving the usual design process for railway electrical systems. Direct costs, for elements such as traction substations, autotransformers and neutral zones, account for a large share of a railway system's budget. This Thesis also examines the indirect costs incurred in installing and operating electrical systems: those derived from environmental impact, the costs of maintaining the electrical equipment and the catenary, the costs of connecting the traction substations to the general or distribution grid and, finally, the installation costs of each element. Based on experience, these were considered relevant enough to warrant some control over them. The Methodology covers the possibility that the proposed electrical designs account for unacceptable cost variations or, at equal electrical parameters, directly proposes the cheapest designs in terms of the costs discussed. In analysing direct and indirect costs, their impact is divided between those incurred during installation and those incurred later, during operation of the railway line. These costs are usually in opposition: the better one is, the worse the other tends to be, so a system that treats both objectives separately is needed. To achieve these objectives, the Methodology is built on the following basic pillars:
- The railway simulator Hamlet, which integrates modules for building complete railway track layouts; a mechanical and traction simulation module for rolling stock; a railway signalling module; and an electrical system module. The software is written in C++ and Matlab.
- An analysis of how to focus the various candidate electrical scenarios so that they can be examined quickly, based on the maximum peak of power demand produced by the railway traffic.
- Optimization algorithms: after a study of the algorithms that could be adapted to a system as complex as the one considered, genetic algorithms were chosen. Three were selected, which made it possible to gather information about the behaviour and results of each: NSGA-II, AMGA-II and ε-MOEA, chosen for their response times, multi-objective capability, ease of adaptation, and broad application in engineering projects.
- The design of objective functions and an equation model prepared to work with the direct and indirect costs and with the basic constraints that the electrical scenarios must not violate; these constraints monitor electrical behaviour and budgetary stability.
The tests carried out with the system either reproduced situations that can arise in practice or used real systems and problems directly; in particular, three real railway lines were integrated and configured in order to evaluate the results produced by the Methodology. Besides validating the Methodology, this made it possible to compare the genetic algorithms with each other and the selected electrical systems with real ones, reaching very satisfactory conclusions. The Methodology opens a very interesting line of work, both for the results already obtained and for the opportunities its evolution may create. This Thesis was developed with that idea in mind, and it is hoped that it can serve as another contribution to the validation and design of railway electric traction systems.
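The Methodology's treatment of opposed installation and operation costs rests on Pareto dominance, which all three chosen algorithms (NSGA-II, AMGA-II, ε-MOEA) share. As a minimal illustration, and not the Thesis's actual implementation, the following Python sketch extracts the Pareto front from a set of candidate scenarios scored on two hypothetical cost objectives:

```python
def pareto_front(scenarios):
    """Return the non-dominated scenarios for two minimization objectives.

    scenarios: list of (name, installation_cost, operation_cost).
    A scenario is dominated if another is no worse on both costs
    and strictly better on at least one.
    """
    front = []
    for name, inst, oper in scenarios:
        dominated = any(
            i2 <= inst and o2 <= oper and (i2 < inst or o2 < oper)
            for _, i2, o2 in scenarios
        )
        if not dominated:
            front.append((name, inst, oper))
    return front

# Hypothetical scenarios: (id, installation cost, operation cost)
candidates = [
    ("A", 10.0, 9.0),   # cheap to build, expensive to run
    ("B", 14.0, 6.0),   # balanced
    ("C", 18.0, 4.0),   # expensive to build, cheap to run
    ("D", 16.0, 8.0),   # dominated by B
]
print(pareto_front(candidates))  # -> A, B and C survive; D is dominated
```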
Abstract:
Efficient hardware implementations of arithmetic operations in Galois fields are highly desirable for several applications, such as coding theory, computer algebra and cryptography. Among these operations, multiplication is of special interest because it is considered the most important building block. Therefore, high-speed algorithms and hardware architectures for computing multiplication are in high demand. In this paper, bit-parallel polynomial basis multipliers over the binary field GF(2^m) generated using type II irreducible pentanomials are considered. The multiplier presented here has the lowest time complexity known to date among similar multipliers based on this type of irreducible pentanomial.
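The paper concerns a hardware architecture, but the underlying operation, polynomial-basis multiplication modulo an irreducible pentanomial, is easy to sketch in software. The Python sketch below multiplies two elements of GF(2^163) reduced by the NIST pentanomial x^163 + x^7 + x^6 + x^3 + 1, used here purely as a familiar stand-in (it is not claimed to be one of the type II pentanomials the paper targets); elements are encoded as Python integers whose bits are polynomial coefficients.

```python
M = 163
# Reduction pentanomial x^163 + x^7 + x^6 + x^3 + 1 (the NIST B-163 field)
PENTANOMIAL = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

def gf2m_mul(a, b):
    """Multiply a, b in GF(2^M), polynomial basis, coefficients as int bits."""
    # Carry-less (XOR) schoolbook multiplication
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    # Reduce modulo the pentanomial, highest degree first
    for deg in range(2 * M - 2, M - 1, -1):
        if result >> deg & 1:
            result ^= PENTANOMIAL << (deg - M)
    return result

x = 1 << 1                    # the element 'x'
print(gf2m_mul(x, 1 << 162))  # x * x^162 = x^163 = x^7 + x^6 + x^3 + 1 -> 201
```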
Abstract:
Machine learning techniques have been recognized as powerful tools for learning from data. One of the most popular learning techniques, Back-Propagation (BP) Artificial Neural Networks, can be used as a computational model to predict peptides binding to the Human Leukocyte Antigens (HLA). The major advantage of computational screening is that it reduces the number of wet-lab experiments that need to be performed, significantly reducing cost and time. A recently developed method, the Extreme Learning Machine (ELM), which has superior properties over BP, has been investigated to accomplish such tasks. In our work, we found that the ELM is as good as, if not better than, BP in terms of time complexity, accuracy deviations across experiments and, most importantly, resistance to over-fitting in predicting peptide binding to HLA.
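The abstract does not spell out the ELM itself, but the method is standard: a single hidden layer whose weights are drawn at random and never trained, with only the output weights fitted in closed form by least squares, which is why training is so much cheaper than back-propagation. A minimal NumPy sketch on synthetic binary labels follows (a generic ELM, not the authors' exact configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=100):
    """Fit an Extreme Learning Machine: random hidden layer + least squares."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))   # random, stays fixed
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # output weights, closed form
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Synthetic stand-in for peptide feature vectors and binding labels
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
model = elm_train(X, y)
pred = (elm_predict(X, model) > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```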
Abstract:
With rapid advances in video processing technologies and ever faster increases in network bandwidth, the popularity of video content publishing and sharing has made similarity search an indispensable operation for retrieving videos of user interest. Video similarity is usually measured by the percentage of similar frames shared by two video sequences, and each frame is typically represented as a high-dimensional feature vector. Unfortunately, the high complexity of video content poses the following major challenges for fast retrieval: (a) effective and compact video representations, (b) efficient similarity measurements, and (c) efficient indexing on the compact representations. In this paper, we propose a number of methods to achieve fast similarity search over very large video databases. First, each video sequence is summarized into a small number of clusters, each of which contains similar frames and is represented by a novel compact model called the Video Triplet (ViTri). A ViTri models a cluster as a tightly bounded hypersphere described by its position, radius, and density. ViTri similarity is measured by the volume of intersection between two hyperspheres multiplied by the minimal density, i.e., the estimated number of similar frames shared by the two clusters. The total number of similar frames is then estimated to derive the overall similarity between two video sequences; hence the time complexity of the video similarity measure can be reduced greatly. To further reduce the number of similarity computations on ViTris, we introduce a new one-dimensional transformation technique which rotates and shifts the original axis system using PCA in such a way that the original inter-distance between two high-dimensional vectors is maximally retained after mapping. An efficient B+-tree is then built on the transformed one-dimensional values of the ViTris' positions. This transformation enables the B+-tree to achieve its optimal performance by quickly filtering out a large portion of non-similar ViTris. Our extensive experiments on real, large video datasets demonstrate the effectiveness of our proposals, which significantly outperform existing methods.
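The filtering step described above can be illustrated independently of the ViTri model. The sketch below (an illustrative reading of the idea, not the paper's implementation) projects high-dimensional cluster centers onto their first principal axis and uses the fact that a one-dimensional distance never exceeds the full distance: any candidate whose projected distance from the query already exceeds the search radius can be discarded without computing the high-dimensional distance, which is what an index such as a B+-tree over the projected values exploits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ViTri positions: cluster centers in a high-dimensional space
centers = rng.normal(size=(10_000, 64))

# PCA rotation/shift: project onto the first principal axis of centered data
mean = centers.mean(axis=0)
_, _, Vt = np.linalg.svd(centers - mean, full_matrices=False)
keys = (centers - mean) @ Vt[0]     # 1-d keys; a B+-tree would index these

def range_search(query, radius):
    """Exact range search with a cheap 1-d pre-filter.

    |key(q) - key(c)| <= ||q - c||, so the 1-d filter never drops
    a true answer; it only discards guaranteed non-matches.
    """
    qkey = (query - mean) @ Vt[0]
    candidates = np.flatnonzero(np.abs(keys - qkey) <= radius)
    dists = np.linalg.norm(centers[candidates] - query, axis=1)
    return candidates[dists <= radius]

query = centers[0] + 0.01
print(len(range_search(query, radius=2.0)))
```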
Abstract:
Proofs by induction are central to many computer science areas such as data structures, theory of computation, programming languages, program efficiency (time complexity), and program correctness. Proofs by induction can also improve students’ understanding of and performance with computer science concepts such as programming languages, algorithm design, and recursion, as well as serve as a medium for teaching them. Even though students are exposed to proofs by induction in many courses of their curricula, they still have difficulties understanding and performing them. This impacts the whole course of their studies, since proofs by induction are omnipresent in computer science. Specifically, students do not gain conceptual understanding of induction early in the curriculum and, as a result, they have difficulties applying it to more advanced areas later on in their studies. The goal of my dissertation is twofold: (1) identifying sources of computer science students’ difficulties with proofs by induction, and (2) developing a new approach to teaching proofs by induction by way of an interactive and multimodal electronic book (e-book). For the first goal, I undertook a study to identify possible sources of computer science students’ difficulties with proofs by induction. Its results suggest that there is a close correlation between students’ understanding of inductive definitions and their understanding and performance of proofs by induction. In designing and developing my e-book, I took into consideration the results of my study, as well as the drawbacks of the current methodologies of teaching proofs by induction for computer science. I designed my e-book to be used as a standalone and complete educational environment. I also conducted a study on the effectiveness of my e-book in the classroom. The results of my study suggest that, unlike the current methodologies of teaching proofs by induction for computer science, my e-book helped students overcome many of their difficulties and gain conceptual understanding of proofs by induction.
Abstract:
There has been increasing interest in the development of new methods that use Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question becomes how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can consider neither multiple performance measures jointly nor multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that are able to account for multiple competing measures at the same time and to compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures, to reduce the number of parameters of such models, since the number of studied cases is usually very small in such comparisons. Data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
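As a rough illustration of the Bayesian side (a generic multinomial-Dirichlet model, not necessarily the exact one developed in the paper): suppose each data set is labeled by which of three joint outcomes occurred when comparing two algorithms on accuracy and time complexity together, e.g. "A dominates", "B dominates", "neither". With a Dirichlet prior, the posterior over outcome probabilities is again Dirichlet, and Monte Carlo sampling gives the posterior probability that A dominates more often than B:

```python
import numpy as np

rng = np.random.default_rng(2)

# Counts over data sets of the joint comparison outcome
# (hypothetical numbers): [A dominates, B dominates, neither]
counts = np.array([14, 6, 10])
prior = np.ones(3)              # uniform Dirichlet prior

# Conjugacy: posterior is Dirichlet(prior + counts)
samples = rng.dirichlet(prior + counts, size=100_000)

p_a_better = (samples[:, 0] > samples[:, 1]).mean()
print(f"P(A dominates more often than B | data) ~ {p_a_better:.3f}")
```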
Abstract:
This project examines the currently available work on explicit and implicit parallelization of the R scripting language and reports experimental findings for the development of a model that predicts, based on input data size and function complexity, when automatic parallelization becomes effective. After finding or creating a series of custom benchmarks, we identified an interval, in terms of data size and time complexity, where replacement becomes a viable option; specifically, between O(N) and O(N^3), exclusive. As data size increases, the benefits of parallel processing become more apparent, and a point is reached where those benefits outweigh the cost in memory transfer time. Based on our observations, this point can be predicted with a fair amount of accuracy using regression on a sample of approximately ten data sizes spread evenly between a system-determined minimum and maximum size.
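The prediction step can be sketched generically (hypothetical timings and function names; the project's actual model and benchmarks are not reproduced here): time a serial and a parallel variant at roughly ten data sizes, fit a regression to each, and report the size at which the fitted parallel curve drops below the serial one.

```python
import numpy as np

def crossover_size(sizes, serial_times, parallel_times, degree=2):
    """Predict the data size where parallel execution starts to pay off.

    Fits a polynomial to each timing series and returns the smallest
    size (on a fine grid) where the fitted parallel time is lower.
    """
    fit_s = np.polynomial.Polynomial.fit(sizes, serial_times, degree)
    fit_p = np.polynomial.Polynomial.fit(sizes, parallel_times, degree)
    grid = np.linspace(min(sizes), max(sizes), 10_000)
    better = grid[fit_p(grid) < fit_s(grid)]
    return better[0] if better.size else None

# Hypothetical timings: parallel pays a fixed transfer overhead,
# then scales better than the serial O(N^2)-like curve.
sizes = np.linspace(1e4, 1e6, 10)
serial = 2e-9 * sizes**2
parallel = 0.5 + 0.6e-9 * sizes**2

print(f"parallelize above N ~ {crossover_size(sizes, serial, parallel):,.0f}")
```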
Abstract:
In this paper, we present a low-complexity algorithm for detection in high-rate, non-orthogonal space-time block coded (STBC) large multiple-input multiple-output (MIMO) systems that achieve high spectral efficiencies of the order of tens of bps/Hz. We also present a training-based iterative detection/channel estimation scheme for such large STBC MIMO systems. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed multistage likelihood ascent search (M-LAS) detector in conjunction with the proposed iterative detection/channel estimation scheme at low complexity. The fact that we could show such good results for large STBCs, such as 16 × 16 and 32 × 32 STBCs from Cyclic Division Algebras (CDA), operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads of pilot-based training for channel estimation and turbo coding) establishes the effectiveness of the proposed detector and channel estimator. We decode perfect codes of large dimensions using the proposed detector. With the feasibility of such a low-complexity detection/channel estimation scheme, large-MIMO systems with tens of antennas operating at several tens of bps/Hz spectral efficiency can become practical, enabling interesting high-data-rate wireless applications.
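The multistage detector in the paper builds on likelihood ascent search (LAS). As a minimal illustration of the core idea only (one-symbol-flip LAS for uncoded BPSK MIMO, not the paper's M-LAS or its STBC setting), the sketch below starts from the matched-filter decision and greedily flips whichever symbol reduces the residual ||y - Hx||^2, stopping at a local optimum:

```python
import numpy as np

rng = np.random.default_rng(3)

def las_detect(H, y):
    """One-bit-flip likelihood ascent search for BPSK (+/-1) symbols."""
    x = np.sign(H.T @ y)                     # matched-filter initial vector
    x[x == 0] = 1.0
    cost = np.sum((y - H @ x) ** 2)
    improved = True
    while improved:
        improved = False
        for i in range(x.size):              # try flipping each symbol
            x[i] = -x[i]
            new_cost = np.sum((y - H @ x) ** 2)
            if new_cost < cost:              # keep the flip if it helps
                cost = new_cost
                improved = True
            else:
                x[i] = -x[i]                 # undo
    return x

n_tx = 16
H = rng.normal(size=(n_tx, n_tx)) / np.sqrt(n_tx)
x_true = rng.choice([-1.0, 1.0], size=n_tx)
y = H @ x_true + 0.05 * rng.normal(size=n_tx)
print("symbol errors:", int(np.sum(las_detect(H, y) != x_true)))
```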
Abstract:
"Extended Clifford algebras" are introduced as a means to obtain low ML decoding complexity space-time block codes. Using left regular matrix representations of two specific classes of extended Clifford algebras, two systematic algebraic constructions of full diversity Distributed Space-Time Codes (DSTCs) are provided for any power of two number of relays. The left regular matrix representation has been shown to naturally result in space-time codes meeting the additional constraints required for DSTCs. The DSTCs so constructed have the salient feature of reduced Maximum Likelihood (ML) decoding complexity. In particular, the ML decoding of these codes can be performed by applying the lattice decoder algorithm on a lattice of four times lesser dimension than what is required in general. Moreover these codes have a uniform distribution of power among the relays and in time, thus leading to a low Peak to Average Power Ratio at the relays.
Abstract:
The differential encoding/decoding setup introduced by Kiran et al., Oggier et al. and Jing et al. for wireless relay networks that use codebooks consisting of unitary matrices is extended to allow codebooks consisting of scaled unitary matrices. For such codebooks to be used in the Jing-Hassibi protocol for cooperative diversity, the conditions that need to be satisfied by the relay matrices and the codebook are identified. A class of previously known rate-one, full-diversity, four-group encodable and four-group decodable Differential Space-Time Codes (DSTCs) is proposed for use as Distributed DSTCs (DDSTCs) in the proposed setup. To the best of our knowledge, this is the first known low-decoding-complexity DDSTC scheme for cooperative wireless networks.
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussion is couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞. In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence, our results can be viewed as providing bounds for the performance with practical distributed schedulers.
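The Θ(√(n/log n)) time scaling can be traced to a standard fact about geometric random graphs (a supporting step stated here from general theory, not quoted from the paper): for the network to be connected with high probability, the transmission radius on a unit-area field must scale as r(n) = Θ(√(log n / n)), so the number of hops needed to cross the field, which lower-bounds the time of any computation that must aggregate information from everywhere, is

```latex
\Theta\!\left(\frac{1}{r(n)}\right)
  \;=\;
\Theta\!\left(\sqrt{\frac{n}{\log n}}\right)
```

which the tree algorithm matches up to constants, while gossip-style algorithms such as Ripple pay an extra logarithmic factor.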