920 results for non-polluting systems
Abstract:
This paper sheds new light on the determination of environmental policies in majoritarian federal electoral systems such as the U.S., and derives implications for the environmental federalism debate on whether the national or local government should have authority over environmental policies. In majoritarian systems, where the legislature consists of geographically distinct electoral districts, the majority party (at either the national or the state level) favors its own home districts; depending on the location of polluting industries and the associated pollution damages, the majority party may therefore impose sub-optimally high or low pollution taxes due to a majority bias. We show that majority bias can influence the social-welfare ranking of alternative government policies and, in some cases, may actually bring distortionary policies closer to the first-best solution.
Abstract:
Evaporative cooling systems continue to be associated with outbreaks of Legionnaires’ disease despite widely available maintenance guidelines intended to reduce these outbreaks. Yet the guidelines vary widely in the recommendations they make for maintaining evaporative cooling systems, and it is unclear whether guidelines were in place or, if they were, whether they were being followed when the outbreaks of Legionnaires’ disease occurred. This study was therefore designed to conduct two systematic reviews: (1) of evaporative cooling system maintenance guidelines and (2) of published Legionnaires’ disease outbreaks. For each maintenance guideline identified in the systematic review, recommended maintenance practices were abstracted and similarities and differences in the reported recommendations were assessed. For the outbreak investigations that met the inclusion criteria established for the study, information about the state of the evaporative cooling system during the outbreak investigation was abstracted to summarize, when reported, which maintenance practices were implemented. As expected, the recommended maintenance procedures varied greatly across the guidelines and were not always specific. Overall, the outbreak investigations tended to report similar maintenance issues that were unclear in the maintenance guidelines. These maintenance issues generally concerned biocide use, microbiological testing, the frequency of general inspections, and the protocols and frequency of total system cleanings. The role that non-standardized and generalized maintenance guidelines play in the continued association between Legionnaires’ disease and evaporative cooling systems is still not fully understood. However, this study suggests that more specific and standardized maintenance guidelines, scientifically established to be effective in controlling Legionella bacteria, are needed, and that these guidelines must then be properly implemented to help reduce further Legionnaires’ disease outbreaks associated with evaporative cooling systems.
Abstract:
This study analyzed the relationship between family support systems and adolescent pregnancy outcomes. The study population was 390 adolescents who had attended the Marion County Health Department Adolescent Family Life Project in Indianapolis, Indiana during a two-year period. The study is unique in that it afforded the opportunity to compare the pregnancy-related characteristics of white and non-white adolescents in the same study. The pregnancy outcomes studied were infant birthweight, school attendance, and pregnancy recidivism. The analysis yielded significant results that supported other research on factors associated with school attendance when family support, the adolescent's age, and ethnicity were controlled. Neither infant birthweight nor repeat pregnancy showed any consistently significant relationship with the independent variables expected to be associated with them. However, comparing infant birthweight among adolescents with and without family support, by ethnicity, yielded some interesting findings. Repeat pregnancy proved an enigma, in that almost no variables in this study were associated with an adolescent having a repeat pregnancy. Familial support in this study appeared to be less important as a factor in adolescent pregnancy outcomes than ethnicity. The non-white adolescents in this study had a better record of remaining in school, both those who lived with parents and those who did not. Low birthweight occurred more often among the non-white adolescents, whether or not they lived with parents. Repeat pregnancy also occurred more often among the non-white adolescents, whether or not they lived with parents.
Abstract:
Background. Preterm birth is a major public health problem. Preterm infants face a post-natal environment that their underdeveloped systems are ill-equipped to manage. Developmentally supportive individualized care has demonstrated positive outcomes in minimizing the resulting negative effects. Non-nutritive sucking (NNS) interventions are thought to promote the development of the suck-swallow-breathe mechanism and to serve as a calming tool. It is hypothesized that growth and development are supported by strengthened sucking skills and stable behavioral states. Objective. To determine the importance of non-nutritive sucking (NNS) on outcomes that are clinically relevant to the preterm infant population. Methods. A computerized search of the MEDLINE and PUBMED databases covering the period from 1975 to May 2011 was conducted. Relevant articles were selected using published criteria for identifying clinically validated studies. The search yielded 10 randomized controlled studies relevant to the outcomes of interest: weight gain, time to full feeds, time to discharge from hospital, and pain response. Results. NNS was found to significantly decrease the length of hospitalization in preterm infants. Although positive results were reported in some of the studies, the results did not show a consistent benefit of NNS with respect to the other major clinical variables. NNS was shown to reduce distress following painful stimuli. Conclusion. Although NNS shows promise for the development of preterm infants, there is a lack of agreement concerning some of the outcomes of interest. Evidence does support NNS's positive contribution to early hospital discharge and pain relief. Future research should focus on long-term, comparable outcomes.
Abstract:
To reach the goals established by the Institute of Medicine (IOM) and the Centers for Disease Control's (CDC) STOP TB USA, measures must be taken to curtail a future peak in tuberculosis (TB) incidence and speed the currently stagnant rate of TB elimination. Both efforts will require, at minimum, the consideration and understanding of the third dimension of TB transmission: the location-based spread of an airborne pathogen among persons known and unknown to each other. This consideration will require an elucidation of the areas within the U.S. that have endemic TB. The Houston Tuberculosis Initiative (HTI) was a population-based active surveillance of confirmed Houston/Harris County TB cases from 1995–2004. Strengths of this dataset include the molecular characterization of laboratory-confirmed cases, the collection of geographic locations (including home addresses) frequented by cases, and the HTI time period, which parallels a decline in TB incidence in the U.S. The HTI dataset was used in this secondary data analysis to implement a GIS analysis of TB cases, the locations frequented by cases, and their association with risk factors for TB transmission. This study reports, for the first time, the incidence of TB among the homeless in Houston, Texas. The homeless are an at-risk population for TB disease, yet they are also a population whose TB incidence has been unknown and unreported due to their non-enumeration. The first section of this dissertation identifies local areas in Houston with endemic TB disease. Many Houston TB cases who reported living in these endemic areas also share the TB risk factor of current or recent homelessness. Merging the 2004–2005 Houston enumeration of the homeless with historical HTI surveillance data of TB cases enabled this first-time report of TB risk among the homeless in Houston. The homeless were more likely to be US-born, to belong to a genotypic cluster, and to belong to a cluster of larger size. The calculated average incidence among homeless persons was 411/100,000, compared to 9.5/100,000 among housed persons. These alarming rates are not driven by a co-infection but by social determinants. Unsheltered persons were hospitalized for more days and required more follow-up time from staff than those who reported a steady housing situation. The homeless are a specific example of the increased targeting of prevention dollars that could occur if TB rates were reported for specific areas with known health disparities rather than as a generalized rate normalized over a diverse population. It has been estimated that 27% of Houstonians use public transportation. The city layout allows bus routes to run like veins connecting even the most diverse populations within the metropolitan area. Secondary data analysis assessed whether frequent bus use (defined as riding a route weekly) among TB cases was related to known TB risk factors. The spatial distribution of genotypic clusters associated with bus use was assessed, along with the reported routes and epidemiologic links among cases belonging to the identified clusters. TB cases who reported frequent bus use were more likely to have demographic and social risk factors associated with poverty, immune suppression, and health disparities.
An equal proportion of bus riders and non-bus riders were cultured for Mycobacterium tuberculosis, yet 75% of bus riders were genotypically clustered, indicating recent transmission, compared to 56% of non-bus riders (OR = 2.4, 95% CI 2.0–2.8, p < 0.001). Bus riders had a mean cluster size of 50.14 vs. 28.9 (p < 0.001). Second-order spatial analysis of clustered fingerprint 2 (n = 122), a Beijing family cluster, revealed geographic clustering among cases based on their report of bus use. Univariate and multivariate analysis of routes reported by cases belonging to these clusters found that 10 of the 14 clusters were associated with bus use. Individual Metro routes, including one route serving the local hospitals, were found to be risk factors for belonging to a cluster shown to be endemic in Houston. The routes themselves geographically connect the census tracts previously identified as having endemic TB. 78% (15/23) of the Houston Metro routes investigated had one or more print groups reporting frequent use in every HTI study year. We present data on three specific but clonally related print groups and show that bus use is clustered in time by route and is the only known link between cases in one of the three prints: print 22. (Abstract shortened by UMI.)
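For readers unfamiliar with these summary measures, the sketch below shows how an incidence rate per 100,000 and an odds ratio with a Wald 95% confidence interval are computed from a 2×2 table of the kind summarized above. The counts are made up purely for illustration and are not taken from the HTI dataset.

```java
// Illustrative sketch (hypothetical counts, not HTI data): incidence rate and
// odds ratio with a Wald 95% confidence interval from a 2x2 table.
public class TbRates {
    // Incidence per 100,000 (cases over an at-risk population estimate).
    static double incidencePer100k(int cases, double population) {
        return 100_000.0 * cases / population;
    }

    // Odds ratio for exposure (e.g., frequent bus use) vs. outcome
    // (e.g., genotypic clustering): OR = (a*d) / (b*c).
    static double oddsRatio(int a, int b, int c, int d) {
        return ((double) a * d) / ((double) b * c);
    }

    // Wald 95% CI computed on the log-odds scale.
    static double[] waldCi95(int a, int b, int c, int d) {
        double logOr = Math.log(oddsRatio(a, b, c, d));
        double se = Math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d);
        return new double[] { Math.exp(logOr - 1.96 * se), Math.exp(logOr + 1.96 * se) };
    }

    public static void main(String[] args) {
        // Hypothetical counts: clustered/unclustered cases among riders vs. non-riders.
        int a = 300, b = 100, c = 560, d = 440;
        double[] ci = waldCi95(a, b, c, d);
        System.out.printf("OR = %.2f, 95%% CI (%.2f, %.2f)%n",
                oddsRatio(a, b, c, d), ci[0], ci[1]);
        System.out.printf("Incidence = %.1f per 100,000%n", incidencePer100k(41, 10_000));
    }
}
```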
Abstract:
Mechanisms that allow pathogens to colonize the host are not the product of isolated genes, but instead emerge from the concerted operation of regulatory networks. Therefore, identifying the components and the systemic behavior of these networks is necessary for a better understanding of gene regulation and pathogenesis. To this end, I have developed systems biology approaches to study transcriptional and post-transcriptional gene regulation in bacteria, with an emphasis on the human pathogen Mycobacterium tuberculosis (Mtb). First, I developed a network response method to identify the parts of the Mtb global transcriptional regulatory network used by the pathogen to counteract phagosomal stresses and survive within resting macrophages. The method unveiled transcriptional regulators and associated regulons utilized by Mtb to establish a successful infection of macrophages throughout the first 14 days of infection. Additionally, this network-based analysis identified the production of Fe-S proteins coupled to lipid metabolism through the alkane hydroxylase complex as a possible strategy employed by Mtb to survive in the host. Second, I developed a network inference method to infer the small non-coding RNA (sRNA) regulatory network in Mtb. The method identifies sRNA-mRNA interactions by integrating a priori knowledge of possible binding sites with structure-driven identification of binding sites. The reconstructed network was useful for predicting functional roles for the multitude of sRNAs recently discovered in the pathogen, several of which were postulated to be involved in virulence-related processes. Finally, I applied a combined experimental and computational approach to study post-transcriptional repression mediated by small non-coding RNAs in bacteria. Specifically, a probabilistic ranking methodology termed rank-conciliation was developed to infer sRNA-mRNA interactions based on multiple types of data. The method was shown to improve target prediction in Escherichia coli and is therefore useful for prioritizing candidate targets for experimental validation.
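As a loose illustration of the general idea of reconciling rankings from multiple data types, the sketch below combines two hypothetical rankings of candidate sRNA targets by average rank (a Borda-style aggregation). This is a generic stand-in, not the thesis's actual rank-conciliation method, and the gene names are invented.

```java
import java.util.*;

// Generic illustration: merge several rankings of candidate sRNA targets into
// one consensus ordering by average rank. Hypothetical data and a simplified
// aggregation rule; not the rank-conciliation methodology described above.
public class RankAggregation {
    static List<String> consensus(List<List<String>> rankings) {
        Map<String, Double> sum = new HashMap<>();
        Map<String, Integer> count = new HashMap<>();
        for (List<String> ranking : rankings) {
            for (int i = 0; i < ranking.size(); i++) {
                String target = ranking.get(i);
                sum.merge(target, (double) (i + 1), Double::sum);   // rank is 1-based
                count.merge(target, 1, Integer::sum);
            }
        }
        List<String> targets = new ArrayList<>(sum.keySet());
        targets.sort(Comparator.comparingDouble(t -> sum.get(t) / count.get(t)));
        return targets;
    }

    public static void main(String[] args) {
        // Hypothetical target genes ranked by two evidence sources,
        // e.g., a sequence-based score and an expression-based score.
        List<String> bySequence = List.of("geneA", "geneB", "geneC", "geneD");
        List<String> byExpression = List.of("geneB", "geneA", "geneD", "geneC");
        System.out.println(consensus(List.of(bySequence, byExpression)));
    }
}
```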
Abstract:
Feline immunodeficiency virus (FIV)-based gene transfer systems are being seriously considered for human gene therapy as an alternative to vectors based on primate lentiviruses, a genetically complex group of retroviruses capable of infecting non-dividing cells. The greater phylogenetic distance between the feline and primate lentiviruses is thought to reduce the chances of generating recombinant viruses. However, the safety of FIV-based vector systems has not been tested experimentally. Since primate lentiviruses such as the human and simian immunodeficiency viruses (HIV/SIV) can cross-package each other's genomes, we tested this trait with respect to FIV. Unexpectedly, the feline and primate lentiviruses were reciprocally able both to cross-package and to propagate each other's RNA genomes. This was largely due to the recognition of the viral packaging signals by the heterologous proteins. However, a simple retrovirus such as Mason-Pfizer monkey virus (MPMV) was unable to package FIV RNA. Interestingly, FIV could package MPMV RNA, but could not propagate it through further steps of replication. These findings suggest that upon co-infection of the same host, cross-packaging may allow distinct retroviruses to generate chimeric variants with unknown pathogenic potential. In order to understand the packaging determinants of FIV, we conducted a detailed mutational analysis of the region thought to contain the FIV packaging signal. We show that the first 90–120 nt of the 5′ untranslated region (UTR) and the first 90 nt of gag were simultaneously required for efficient FIV RNA packaging. These results suggest that the primary FIV packaging signal is multipartite and discontinuous, composed of two core elements separated by 150 nt of the 5′ UTR. The above studies are being used towards the development of safer FIV-based self-inactivating (SIN) vectors. These vectors are being designed to eliminate the ability of FIV transfer vector RNAs to be mobilized by primate lentiviral proteins that may be present in the target cells. Preliminary tests of the first generation of these vectors have revealed that they are incapable of being propagated by feline proteins. The inability of FIV transfer vectors to express packageable vector RNA after integration should greatly increase the safety of FIV vectors for human gene therapy.
Abstract:
This PhD thesis focused on the analysis and application of microbial membrane lipids as biomarkers in marine sediments. Existing protocols for lipid extraction from marine sediments and biomass were modified. In addition, recent protocols based on high-performance liquid chromatography coupled to mass spectrometry (HPLC-MS), together with state-of-the-art mass spectrometric analysis on quadrupole time-of-flight (qTOF) and triple quadrupole (QQQ) instruments, were used to investigate matrix effects and to evaluate the reliability of quantitative analysis in marine environmental samples. The improved lipid extraction and quantification were used to analyze Black Sea water column and sediment samples down to a sediment depth of 8 meters from site GeoB 15105, taken during cruise M84/1 (DARCSEAS) with R/V Meteor, in order to apply lipid analysis to benthic biosystems. With this component-specific differentiation between planktonic and benthic lipid signatures, we assessed possible lipid sources. This detailed lipid fingerprinting allowed us to observe changes in the head group and lipid core structures of the intact polar lipids according to the geochemical zonation.
Abstract:
Based on consolidated-statement data for the universal/commercial banks (UKbank) and the non-bank financial institutions with quasi-banking licenses, this paper shows a keen need for more detailed data on both sides (assets and liabilities) of their financial positions, and for further analyses. These would allow more adequate assessments of the Philippine financial system, especially with regard to each financial subsector's financing/lending preferences and behavior. The paper also shows that a skewed locational and operational distribution may exist in the non-UKbank financial subsectors. This suggests that the real operations of the banking and non-bank financial institutions may deviate significantly from the financial system intended and anticipated by the authorities (the BSP, the SEC, and others).
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these networks can transfer data at high speed. The concept of distributed systems emerged to describe systems whose different parts are executed on several nodes that interact with each other via a communication network. Java’s popularity, facilities, and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification under the Java Community Process (JSR-302) is being developed. Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented with the main requirements in mind, such as predictability and reliability of the timing behavior and the resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations.
Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing us to schedule the communications and adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block), and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
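A minimal, hypothetical sketch of the two-phase idea described in this abstract is given below: phase 1 would reserve the resources and non-functional parameters attached to a remote reference, and phase 2 would perform the invocation within those parameters. The class and method names are invented for illustration; they are not the thesis's API, nor part of RTSJ/DRTSJ, and the deadline here is enforced with a plain executor only for simplicity.

```java
import java.util.concurrent.*;

// Hypothetical sketch of a two-phase remote invocation: reserve first, invoke
// later under the reserved timing parameters. Illustrative names only.
public class TwoPhaseInvocationSketch {

    // Non-functional parameters attached to a remote reference (illustrative).
    record RtParams(long deadlineMillis, int priority, int maxMessageBytes) {}

    // Phase 1: bind a remote reference and pre-allocate what the call needs.
    static RtParams reserve(String remoteObject, RtParams requested) {
        // A real middleware would create the handler thread, buffers and
        // connection for 'remoteObject' here, so that phase 2 does no dynamic
        // allocation. This sketch simply echoes the requested parameters.
        return requested;
    }

    // Phase 2: run the invocation and enforce the deadline.
    static <T> T invoke(Callable<T> remoteCall, RtParams params) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<T> result = executor.submit(remoteCall);
            return result.get(params.deadlineMillis(), TimeUnit.MILLISECONDS);
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        RtParams params = reserve("FlightPlanService", new RtParams(50, 10, 4096));
        String reply = invoke(() -> "ACK", params);   // stand-in for a real remote call
        System.out.println(reply);
    }
}
```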
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and up-to-date view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model mitigates the complexity and variability of these systems and enables abstraction over non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at execution time. The agent infrastructure is described in further detail, and its components and the relationships between them are explained. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to supporting distributed configuration scenarios.
Abstract:
In this paper we generalize the Continuous Adversarial Queuing Theory (CAQT) model (Blesa et al. in MFCS, Lecture Notes in Computer Science, vol. 3618, pp. 144–155, 2005) by considering the possibility that the router clocks in the network are not synchronized. We name the new model Non-Synchronized CAQT (NSCAQT). Clearly, this new extension of the model only affects those scheduling policies that use some form of timing. In a first approach we consider the case in which, although not synchronized, all clocks run at the same speed, maintaining constant differences. In this case we show that all universally stable policies in CAQT that use the injection time and the remaining path to schedule packets remain universally stable. These policies include, for instance, Shortest in System (SIS) and Longest in System (LIS). Then we study the case in which clock differences can vary over time, but the maximum difference is bounded. In this model we show the universal stability of two families of policies related to SIS and LIS respectively (the priority of a packet in these policies depends on the arrival time and a function of the path traversed). The bounds we obtain in this case depend on the maximum difference between clocks. This is a necessary requirement, since we also show that LIS is not universally stable in systems without bounded clock difference. We then present a new policy that we call Longest in Queues (LIQ), which gives priority to the packet that has been waiting the longest in edge queues. This policy is universally stable and, if clocks maintain constant differences, the bounds we prove do not depend on them. Finally, we provide simulation results that compare the behavior of some of these policies in a network with stochastic injection of packets.
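A toy sketch of the priorities behind these policies follows. The packet fields and the waiting-time bookkeeping are illustrative simplifications, not the paper's formal model: LIS prefers the earliest-injected packet, SIS the latest-injected one, and LIQ (as described above) the packet with the largest accumulated waiting time in edge queues.

```java
import java.util.*;

// Illustrative comparison of the LIS, SIS and LIQ priorities on a toy packet
// record. Simplified bookkeeping; not the paper's formal definitions.
public class QueuePolicies {
    static class Packet {
        final String id;
        final double injectionTime;     // local clock value when injected
        double waitedInQueues;          // total time spent in edge queues so far
        Packet(String id, double injectionTime, double waitedInQueues) {
            this.id = id;
            this.injectionTime = injectionTime;
            this.waitedInQueues = waitedInQueues;
        }
    }

    // LIS: Longest In System -> smallest injection time first.
    static final Comparator<Packet> LIS = Comparator.comparingDouble(p -> p.injectionTime);
    // SIS: Shortest In System -> largest injection time first.
    static final Comparator<Packet> SIS = LIS.reversed();
    // LIQ: Longest In Queues -> largest accumulated queue waiting time first.
    static final Comparator<Packet> LIQ =
            Comparator.comparingDouble((Packet p) -> p.waitedInQueues).reversed();

    public static void main(String[] args) {
        List<Packet> queue = new ArrayList<>(List.of(
                new Packet("a", 0.0, 1.5),
                new Packet("b", 2.0, 4.0),
                new Packet("c", 1.0, 0.5)));
        queue.sort(LIQ);
        System.out.println("Next under LIQ: " + queue.get(0).id);  // "b"
        queue.sort(LIS);
        System.out.println("Next under LIS: " + queue.get(0).id);  // "a"
    }
}
```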
Abstract:
Membrane systems are computationally equivalent to Turing machines. However, their distributed and massively parallel nature allows polynomial solutions to problems whose traditional solutions are non-polynomial. It is therefore very important to develop dedicated hardware and software implementations exploiting these two features of membrane systems. In distributed implementations of P systems, a communication bottleneck problem arises: as the number of membranes grows, the network becomes congested. The purpose of distributed architectures is to reach a compromise between the massively parallel character of the system and the time needed for an evolution step, i.e., the transition from one configuration of the system to the next, thereby solving the communication bottleneck problem. The goal of this paper is twofold. First, to survey in a systematic and uniform way the main results regarding how membranes can be placed on processors in order to obtain a software/hardware simulation of P systems in a distributed environment. Second, we improve some results about the membrane dissolution problem, prove that it is connected, and discuss the possibility of simulating this property in the distributed model. All this improves the implementation of system parallelism, since it increases the parallelism of the external communication among processors. The proposed ideas improve on previous architectures for tackling the communication bottleneck problem by reducing the total time of an evolution step, increasing the number of membranes that can run on a processor, and reducing the number of processors.
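As a purely illustrative sketch of the membrane-to-processor placement question, the code below assigns membranes with estimated per-step workloads to processors using a generic greedy longest-processing-time heuristic, so that the per-processor load (and hence the evolution step time) stays balanced. This is a stand-in for the placement idea only, not one of the surveyed architectures, and it ignores inter-membrane communication costs.

```java
import java.util.*;

// Hypothetical placement sketch: greedily assign the heaviest remaining
// membrane to the currently least-loaded processor. Workloads are made up.
public class MembranePlacement {
    static Map<Integer, List<Integer>> place(double[] membraneLoad, int processors) {
        // Sort membrane indices by decreasing estimated load.
        Integer[] order = new Integer[membraneLoad.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Double.compare(membraneLoad[b], membraneLoad[a]));

        double[] load = new double[processors];
        Map<Integer, List<Integer>> assignment = new HashMap<>();
        for (int p = 0; p < processors; p++) assignment.put(p, new ArrayList<>());

        for (int membrane : order) {
            int lightest = 0;                      // processor with least load so far
            for (int p = 1; p < processors; p++)
                if (load[p] < load[lightest]) lightest = p;
            load[lightest] += membraneLoad[membrane];
            assignment.get(lightest).add(membrane);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Hypothetical per-membrane workloads for one evolution step.
        double[] loads = {5.0, 3.0, 3.0, 2.0, 2.0, 1.0};
        System.out.println(place(loads, 2));       // e.g. {0=[0, 3, 5], 1=[1, 2, 4]}
    }
}
```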
Abstract:
Multicarrier transmission such as OFDM (orthogonal frequency division multiplexing) is an established technique for radio transmission systems and can be considered a promising approach for next-generation wireless systems. However, in order to meet the demand for ever-increasing data rates, particularly in wireless technologies, systems with multiple transmit and receive antennas, also called MIMO (multiple-input multiple-output) systems, have become indispensable for future generations of wireless systems. Owing to the strongly increasing demand for high-data-rate transmission systems, frequency non-selective MIMO links have reached a state of maturity, and frequency-selective MIMO links are now the focus of interest. In this field, the combination of MIMO transmission and OFDM can be considered an essential part of fulfilling the requirements of future generations of wireless systems. While single-user scenarios have reached a state of maturity, multi-user scenarios require substantial further research. In contrast to ZF (zero-forcing) multiuser transmission techniques, the individual user's channel characteristics are taken into consideration in this contribution. The joint optimization of the number of activated MIMO layers and the number of transmitted bits per subcarrier shows that not all user-specific MIMO layers per subcarrier necessarily have to be activated in order to minimize the overall BER under the constraint of a given fixed data throughput.
Abstract:
In order to meet the demand for ever-increasing data rates, particularly in wireless technologies, systems with multiple transmit and receive antennas, also called MIMO (multiple-input multiple-output) systems, have become indispensable for future generations of wireless systems. Owing to the strongly increasing demand for high-data-rate transmission systems, frequency non-selective MIMO links have reached a state of maturity, and frequency-selective MIMO links are now the focus of interest. In this field, the combination of MIMO transmission and OFDM (orthogonal frequency division multiplexing) can be considered an essential part of fulfilling the requirements of future generations of wireless systems. While single-user scenarios have reached a state of maturity, multi-user scenarios require substantial further research. In contrast to ZF (zero-forcing) multiuser transmission techniques, the individual user's channel characteristics are taken into consideration in this contribution. The joint optimization of the number of activated MIMO layers and the number of transmitted bits per subcarrier, along with the appropriate allocation of the transmit power, shows that not all user-specific MIMO layers per subcarrier necessarily have to be activated in order to minimize the overall BER under the constraint of a given fixed data throughput.
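A rough sketch of the kind of bit-loading decision described in these two abstracts is given below. The per-layer SNR values, the candidate constellations, and the min-margin selection rule are illustrative assumptions, not the papers' algorithm: for square QAM the layer bit-error probability decreases with 3·SNR/(2^b − 1), so instead of evaluating the exact BER the search simply maximizes that quantity on the weakest active layer, for a fixed number of bits per subcarrier, allowing weak layers to stay switched off.

```java
import java.util.*;

// Hypothetical bit-loading sketch: distribute a fixed number of bits per
// subcarrier over MIMO layers (layers may remain off) so that the worst
// active layer is as robust as possible. Illustrative data and criterion.
public class BitLoadingSketch {
    static final int[] BITS = {0, 2, 4, 6};       // off, QPSK, 16-QAM, 64-QAM

    static int[] allocate(double[] layerSnr, int totalBits) {
        int[] best = null;
        double bestMargin = -1.0;
        for (int[] combo : combinations(layerSnr.length, totalBits)) {
            double margin = Double.MAX_VALUE;
            for (int l = 0; l < combo.length; l++) {
                if (combo[l] == 0) continue;      // layer not activated
                margin = Math.min(margin, 3.0 * layerSnr[l] / (Math.pow(2, combo[l]) - 1));
            }
            if (margin > bestMargin) { bestMargin = margin; best = combo.clone(); }
        }
        return best;
    }

    // Enumerate all ways of drawing one entry of BITS per layer summing to totalBits.
    static List<int[]> combinations(int layers, int totalBits) {
        List<int[]> out = new ArrayList<>();
        search(new int[layers], 0, totalBits, out);
        return out;
    }

    static void search(int[] combo, int layer, int remaining, List<int[]> out) {
        if (layer == combo.length) {
            if (remaining == 0) out.add(combo.clone());
            return;
        }
        for (int b : BITS) {
            if (b > remaining) continue;
            combo[layer] = b;
            search(combo, layer + 1, remaining - b, out);
        }
        combo[layer] = 0;
    }

    public static void main(String[] args) {
        double[] snr = {40.0, 12.0, 2.5, 0.8};    // hypothetical per-layer SNRs (linear)
        System.out.println(Arrays.toString(allocate(snr, 8)));  // weak layers may stay off
    }
}
```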