936 results for Direct-sequence code division multiple access
Abstract:
Several researchers have observed a significant demographic imbalance (male/female ratio) in north-western India, thought to be one of the direct consequences of prenatal sex selection driven by a strong preference for the birth of sons. In the cities of Jaipur (Rajasthan) and Gurgaon (Haryana), among married women from affluent socio-economic backgrounds, I sought to understand how women experience son preference in their daily lives and to explore why they reproduce these preferences and discriminations between boys and girls in their reproductive practices. Recourse to sex selection appears to resolve many of the tensions produced by the interplay of economic change, the intensification of family planning, access to reproductive health technologies, and the family's filial expectations. Although women face family pressure to conceive a son, they may also desire sons for themselves, a preference shaped by a gendered construction of identity that ties the fulfilment of womanhood to the roles of wife and mother of sons. The pressure to bear a son thus appears to emanate as much from social structures as from the immediate family environment.
Abstract:
Background: Essential hypertension, a major risk factor in the development of cardiovascular disease, is a complex multigenic trait whose genetic determinants remain incompletely understood. Numerous quantitative trait loci (QTLs), i.e. genes responsible for variations in blood pressure (BP), have been identified in humans and in animal models. How these genes act together to regulate BP, however, remains a mystery. Hypothesis and objective: Rather than a sum of QTLs each with an infinitesimal effect on BP, epistatic interactions between genes may be responsible for the hypertensive phenotype. Studying this epistasis among the genes involved, directly or indirectly, in BP homeostasis should allow us to explore new molecular regulatory pathways implicated in the disease. Methods: QTLs can be revealed by constructing congenic rat strains, in which a chromosomal segment of a hypertensive recipient strain (Dahl Salt-Sensitive, SS/Jr) is replaced by its homologue from a normotensive donor strain (Lewis, LEW). In this context, combining QTLs by creating double or multiple congenic strains provides the first functional demonstration of intergenic interactions. Results: A total of twenty-seven combinations led us to recognize a modularization of the QTLs. They were grouped into two principal epistatic modules (EMs), in which QTLs belonging to the same EM are epistatic to one another and participate in the same regulatory pathway; the EMs/cascades then act in parallel to regulate BP. Thanks to QTLs with opposite effects on BP, we were able to establish the hierarchical order within three pairs of QTLs.
When this regulatory sequence cannot be determined, however, other approaches are needed. Our work led to the identification of a QTL on rat chromosome 16 (C16QTL), belonging to EM1, which may reveal a new pathway of BP homeostasis. The gene retinoblastoma-associated protein 140 (Rap140)/family with sequence similarity 208 member A (Fam208a), which carries a non-synonymous mutation between SS/Jr and LEW, is the most plausible candidate gene for C16QTL. It encodes a transcription factor and appears to influence the expression of Solute carrier family 7 (cationic amino acid transporter, y+ system) member 12 (Slc7a12), which is specifically and significantly under-expressed in the kidneys of the congenic strain carrying C16QTL relative to SS/Jr. Rap140/Fam208a may act as a transcriptional inhibitor of Slc7a12, leading to lower blood pressure in Lewis. Conclusions: The complex architecture of BP regulation is coming into view, featuring new actors, most of them previously unknown for any involvement in BP. Studying the new Rap140/Fam208a - Slc7a12 signalling pathway will deepen our understanding of blood pressure homeostasis and of hypertension in SS/Jr. In the long term, new antihypertensive treatments targeting more than one regulatory pathway at a time could emerge.
Abstract:
In the present investigation, an attempt is made to study late Quaternary foraminiferal and pteropod records of the shelf of northern Kerala and to evaluate their potential for paleoceanographic and paleoclimatic reconstruction. The study gives details of sediment cores, general characteristics of foraminifera and pteropod species recorded from the examined samples and their systematic classification, and the spatial distribution of Recent foraminifera and pteropods and their response to varying bathymetry, nature of substrate, organic matter content in sediment and hydrography across the shelf. An attempt is also made to establish an integrated chronostratigraphy for the examined core sections, to identify microfaunal criteria useful for biostratigraphic division of shallow marine core sections, and to infer the factors responsible for changes in the microfaunal assemblage. Reconstruction of sea level changes during the last 36,000 years was attempted based on the pteropod record. The study reveals a bathymetric control on the benthic/planktic foraminiferal (BF/PF) and pteropod/planktic foraminiferal (Pt/PF) abundance ratios. The bathymetric distribution pattern of the BF/PF ratio is opposite to that of the Pt/PF ratio, with the former decreasing from the shore across the shelf. The quantitative benthic foraminiferal record in the surficial sediments reveals a positive correlation between diversity and bathymetry. R-mode cluster analysis performed on 30 significant Recent benthic foraminiferal species identifies three major assemblages.
Abstract:
Significant results are presented from our experimental investigations of the pH dependence of the real-time transmission characteristics of recording media fabricated by doping PVC with complexed methylene blue. The optimum pH value for faster bleaching was found to be 4.5. In typical applications, illumination from one side, normal to the surface of this material, initiates a chemical sequence that records the incident light pattern in the polymer, so direct imaging can be done successfully on this sample. The recorded letters were very legible, with good contrast and no scattering centres. Diffraction efficiency measurements were also carried out on this material.
Abstract:
This thesis deals with the fabrication and characterization of novel all-fiber components for access networks. All-fiber components offer distinctive advantages: low forward and backward losses, an epoxy-free optical path and high power handling. A novel fabrication method is realized for monolithic 1x4 couplers, which are vital components in distributed passive optical networks. The method differs from conventional structures by its symmetric coupling profile and hence offers ultra-wideband performance and easy process control. A new 1x4 coupler structure, made by fusing five fibers, is proposed to achieve high uniformity, giving performance equivalent to 1x4 planar lightwave splitters. Isolation in fused-fiber WDM is improved by integrating long-period gratings. Packaging techniques for fused couplers are analyzed for long-term stability.
Abstract:
This thesis presents analytical and numerical results from studies based on the multiple quantum well laser rate equation model. We address the problem of controlling chaos produced by direct modulation of laser diodes. We consider delay feedback control methods for this purpose and study their performance using numerical simulation. Besides the control of chaos, the control of other nonlinear effects such as quasiperiodicity and bistability using delay feedback methods is also investigated. A number of secure communication schemes based on synchronization of chaos in semiconductor lasers have been successfully demonstrated theoretically and experimentally. Current investigations in this field include the study of practical issues in the implementation of such encryption schemes. We theoretically study issues such as channel delay, phase mismatch and frequency detuning on the synchronization of chaos in directly modulated laser diodes. These results should be helpful for designing and implementing chaotic encryption schemes based on synchronization of chaos in modulated semiconductor lasers.
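The delay feedback idea can be illustrated on a far simpler chaotic system than the laser rate equations. The sketch below applies a Pyragas-type delayed feedback term K(x[n-1] - x[n]) to the logistic map; the map, the gain K and the parameter r are stand-ins chosen for illustration, not the thesis's model.

```python
# Pyragas-type delayed feedback control demonstrated on the logistic map,
# a simple stand-in for the thesis's modulated-laser rate-equation model.
# The control term K*(x[n-1] - x[n]) vanishes on the stabilized orbit, so
# the scheme is non-invasive; r and K are illustrative choices (linear
# analysis of the fixed point gives stability for -1 < K < -0.4 at r = 3.8).
r = 3.8
K = -0.7
x_star = 1.0 - 1.0 / r            # unstable fixed point of the map

def step(x, x_prev, gain):
    return r * x * (1.0 - x) + gain * (x_prev - x)

# Without control the orbit wanders chaotically.
x, x_prev = 0.72, 0.70
free = []
for _ in range(200):
    x, x_prev = step(x, x_prev, 0.0), x
    free.append(x)

# With delayed feedback the orbit settles onto the fixed point.
x, x_prev = 0.72, 0.70
for _ in range(200):
    x, x_prev = step(x, x_prev, K), x

print(abs(x - x_star))            # tiny residual: chaos suppressed
```

The same non-invasive principle carries over to continuous-time laser models, where the feedback is applied to the injection current or the optical field with a continuous delay.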
Abstract:
Modern computer systems are plagued with stability and security problems: applications lose data, web servers are hacked, and systems crash under heavy load. Many of these problems, or anomalies, arise from rare program behavior caused by attacks or errors. A substantial percentage of web-based attacks are due to buffer overflows, and many methods have been devised to detect and prevent the anomalous situations that arise from them. The current state of the art in anomaly detection systems is relatively primitive and mainly depends on static code checking to handle buffer overflow attacks; for protection, Stack Guards and Heap Guards are also widely used. This dissertation proposes an anomaly detection system based on the frequencies of system calls in the system call trace. System call traces represented as frequency sequences are profiled using sequence sets, where a sequence set is identified by the starting sequence and the frequencies of specific system calls. The deviations of the current input sequence from the corresponding normal profile in the frequency pattern of system calls are computed and expressed as an anomaly score, and a simple Bayesian model is used for accurate detection. Experimental results are reported which show that the frequency of system calls, represented using sequence sets, captures the normal behavior of programs under normal conditions of usage. This captured behavior allows the system to detect anomalies with a low rate of false positives. Data are presented which show that a Bayesian network on frequency variations responds effectively to induced buffer overflows. It can also help administrators detect deviations in program flow introduced by errors.
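The core scoring idea can be sketched in a few lines. This is a deliberate simplification, not the thesis's scheme: the profile here is a mean relative frequency per system call and the score is a plain absolute deviation, standing in for the sequence sets and Bayesian model described above. The trace contents are made up.

```python
# Sketch of frequency-based system-call anomaly scoring. The profile
# (mean relative frequency per call) and the absolute-deviation score are
# deliberate simplifications of the sequence-set / Bayesian model above.
from collections import Counter

def profile(traces):
    """Average the relative frequency of each system call over training traces."""
    totals = Counter()
    for trace in traces:
        for call, c in Counter(trace).items():
            totals[call] += c / len(trace)
    return {call: s / len(traces) for call, s in totals.items()}

def anomaly_score(trace, normal):
    """Sum of absolute deviations from the normal frequency profile."""
    counts = Counter(trace)
    n = len(trace)
    calls = set(normal) | set(counts)
    return sum(abs(counts.get(c, 0) / n - normal.get(c, 0.0)) for c in calls)

normal = profile([["open", "read", "read", "close"],
                  ["open", "read", "close", "close"]])
print(anomaly_score(["open", "read", "read", "close"], normal))   # low: normal-looking
print(anomaly_score(["open"] * 10, normal))                       # high: overflow-like skew
```

A trace whose call mix resembles the training data scores near zero, while a trace dominated by a single repeated call (as an exploited program might produce) scores much higher, which is the intuition behind thresholding on an anomaly score.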
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals and significantly improve software quality; it remains a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, using static analysis of machine code to make early detection of otherwise hard-to-find software bugs more effective. The focus of this work is to develop methods that automatically localize faults and optimize the code, thus improving both the debugging process and code quality. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to help the compiler eliminate redundant bank-switching code and decide on the optimal allocation of data to banked memory, minimizing the number of bank-switching instructions in embedded system software.
A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used to detect redundant code. Instances of code redundancy are identified based on the stipulated rules for the target processor. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation and contributes to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture-oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices toward correct use of difficult microcontroller features when developing embedded systems.
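A toy version of the redundant bank-switch elimination can be sketched as follows. The `BANKSEL n` mnemonic and the straight-line-only analysis are simplifying assumptions: the actual technique tracks bank state across the whole control flow graph via the relation matrix and state transition diagram, and on the PIC16F87X bank selection is done through the RP0/RP1 bits of the STATUS register.

```python
# Minimal sketch of redundant bank-switch elimination: walk straight-line
# code, track the active memory bank, and drop any bank-select instruction
# that re-selects the bank already active. "BANKSEL n" is a hypothetical
# mnemonic standing in for the real PIC bank-select bit manipulation.

def eliminate_redundant_bank_switches(code):
    optimized, active = [], None        # active bank unknown at entry
    for instr in code:
        if instr.startswith("BANKSEL"):
            bank = int(instr.split()[1])
            if bank == active:          # re-selecting the current bank: redundant
                continue
            active = bank
        optimized.append(instr)
    return optimized

prog = ["BANKSEL 0", "MOVF A", "BANKSEL 0", "ADDWF B", "BANKSEL 1", "MOVWF C"]
print(eliminate_redundant_bank_switches(prog))
```

The second `BANKSEL 0` is dropped because bank 0 is already active; a real implementation must additionally merge bank states at control-flow join points before declaring a switch redundant.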
Abstract:
A data centre is a centralized repository, either physical or virtual, for the storage, management and dissemination of data and information organized around a particular body, and it is the nerve centre of the present IT revolution. Data centres are expected to serve uninterruptedly round the year, and doing so consumes enormous energy in the present scenario. Tremendous growth in demand from the IT industry has made it customary to develop newer technologies for better data centre operation. Energy conservation activities in data centres mainly concentrate on the air conditioning system, since it is the major mechanical sub-system and consumes a considerable share of the total power of the facility. The data centre energy metric is best represented by power utilization efficiency (PUE), defined as the ratio of total facility power to IT equipment power. Its value is always greater than one, and a large PUE indicates that the sub-systems draw more power from the facility and that the data centre performs poorly from the standpoint of energy conservation. PUE values of 1.4 to 1.6 are achievable by proper design and management techniques. Optimizing the air conditioning system offers an enormous opportunity to bring down the PUE value. The air conditioning system can be optimized by two approaches, namely thermal management and air flow management. Thermal management systems have now been introduced by some companies, but they are highly sophisticated and costly and have not received much attention in rule-of-thumb design practice.
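The PUE definition above is a simple ratio; a worked example with illustrative (not measured) figures:

```python
# Power utilization efficiency (PUE) as defined above: total facility power
# divided by IT equipment power. 1.0 is the theoretical ideal; lower is
# better. The load figures below are illustrative, not from the thesis.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

it_load = 500.0                          # kW consumed by IT equipment
total = it_load + 200.0 + 50.0           # + cooling + other facility overhead
print(round(pue(total, it_load), 2))     # 1.5, within the achievable 1.4-1.6 band
```

In this example the 200 kW cooling share dominates the overhead, which is why optimizing the air conditioning system is the main lever on PUE.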
Abstract:
This work identifies the importance of plenum pressure for the performance of the data centre. The present methodology followed in industry treats the pressure drop across the tile as a dependent variable, but this work shows that it is in fact the single independent variable responsible for the entire flow dynamics in the data centre, and any design or assessment procedure must treat the pressure difference across the tile as the primary independent variable. This concept is further explained by studies on the effect of dampers on the flow characteristics. Dampers were found to introduce an additional pressure drop, thereby reducing the effective pressure drop across the tile. A damper changes the flow in both quantitative and qualitative respects, but only the quantitative aspect is considered when dampers are used as an aid for capacity control. Results from the present study suggest that dampers should be avoided in data centres and that well-designed tiles giving the required flow rates should be used in the appropriate locations. The present study also examines the effect of hot-air recirculation under suitable assumptions. It identifies the pressure drop across the tile as the dominant parameter governing recirculation. The rack suction pressure of the hardware, together with the pressure drop across the tile, determines the point of recirculation in the cold aisle. The positioning of hardware in the racks plays an important role in controlling the recirculation point. The present study is thus helpful in the design of data centre air flow based on the theory of jets, since the air flow can be modelled both quantitatively and qualitatively from the results.
Abstract:
The modern telecommunication industry demands higher-capacity networks with high data rates. Orthogonal frequency division multiplexing (OFDM) is a promising technique for high-data-rate wireless communications at reasonable complexity in wireless channels, and it has been adopted for many types of wireless systems such as wireless local area networks (IEEE 802.11a) and digital audio/video broadcasting (DAB/DVB). The proposed research focuses on a concatenated coding scheme that improves the performance of OFDM-based wireless communications. It uses a Redundant Residue Number System (RRNS) code as the outer code and a convolutional code as the inner code. The bit error rate (BER) performance of the proposed system is investigated under different channel conditions, including the effects of additive white Gaussian noise (AWGN), multipath delay spread, peak power clipping and frame start synchronization error. The simulation results show that the proposed RRNS-Convolutional concatenated coding (RCCC) scheme provides significant improvement in system performance by exploiting the inherent properties of RRNS.
Abstract:
The present thesis studies the problem of protein folding using Monte Carlo and Langevin simulations. Three topics in protein folding have been studied: 1) the effect of confining potential barriers, 2) the effect of a static external field, and 3) the design of amino acid sequences which fold in a short time and have a stable native state (global minimum). Regarding the first topic, we studied the confinement of a small protein of 16 amino acids known as 1NJ0 (PDB code), whose native state is a beta-sheet structure. The confinement of proteins occurs frequently in the cell environment. Some molecules called chaperones, present in the cytoplasm, capture unfolded proteins in their interior and prevent the formation of aggregates and misfolded proteins. This mechanism of confinement mediated by chaperones is not yet well understood. In the present work we considered two kinds of potential barriers that try to mimic the confinement induced by a chaperone molecule. The first was a purely repulsive barrier whose only effect is to create a cavity where the protein folds up correctly; the second included both attractive and repulsive effects. We performed Wang-Landau simulations to calculate the thermodynamic properties of 1NJ0. From the free-energy landscape we found that 1NJ0 has two intermediate states in the bulk (without confinement) that are clearly separated from the native and unfolded states. For the purely repulsive barrier we found that the intermediate states move closer to each other in the free-energy landscape and eventually collapse into a single intermediate state, while the unfolded state becomes more compact than in the bulk as the size of the barrier decreases. For an attractive barrier, modifications of the states (native, unfolded and intermediate) are observed depending on the degree of attraction between the protein and the walls of the barrier.
The strength of the attraction is measured by the parameter $\epsilon$: a purely repulsive barrier is obtained for $\epsilon=0$ and a purely attractive barrier for $\epsilon=1$. The states change only slightly for attraction magnitudes up to $\epsilon=0.4$; the disappearance of the intermediate states of 1NJ0 is already observed for $\epsilon=0.6$, and a very attractive barrier ($\epsilon \sim 1.0$) produces a completely denatured state. In the second topic of this thesis we dealt with the interaction of a protein with an external electric field. We demonstrated by means of computer simulations, specifically the Wang-Landau algorithm, that the folded, unfolded and intermediate states can be modified by a field. We found that an external field can induce several modifications in the thermodynamics of these states: for relatively low magnitudes of the field ($<2.06 \times 10^8$ V/m) no major changes in the states are observed, but for magnitudes higher than $6.19 \times 10^8$ V/m one observes the appearance of a new native state with a helix-like structure. In contrast, the original native state is a $\beta$-sheet structure. In the new native state all the dipoles in the backbone are aligned parallel to the field. The design of amino acid sequences constitutes the third topic of the present work. We have tested the Rate of Convergence criterion proposed by D. Gridnev and M. Garcia ({\it work unpublished}), applying it to the study of off-lattice models. The Rate of Convergence criterion is used to decide whether a certain sequence will fold up correctly within a relatively short time. Before the present work, the common way to decide whether a sequence was a good or bad folder was to run the whole dynamics until the sequence reached its native state (if it existed), or to study the curvature of the potential energy surface. Both approaches have difficulties.
In the first, performing the complete dynamics for hundreds of sequences is a rather challenging task because of the CPU time needed; in the second, calculating the curvature of the potential energy surface is possible only for very smooth surfaces. The Rate of Convergence criterion avoids both difficulties: one does not need to perform the complete dynamics to separate good from bad sequences, and the criterion does not depend on the kind of force field used, so it can be applied even to very rugged energy surfaces.
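The Wang-Landau sampling used throughout this work can be illustrated on a much smaller system than a protein. The sketch below estimates the density of states of an 8-spin Ising ring, chosen here because its exact degeneracies are known (g(E) = 2, 56, 140, 56, 2 for E = -8, -4, 0, 4, 8), so the estimate can be checked; the batch size, flatness threshold and final modification factor are illustrative settings, not the thesis's.

```python
# Minimal Wang-Landau sketch: random walk in energy with acceptance
# probability min(1, g(E)/g(E')), updating ln g at each visit and halving
# the modification factor ln f whenever the visit histogram is flat.
import math, random

random.seed(1)
N = 8
ENERGIES = list(range(-N, N + 1, 4))       # attainable ring energies

log_g = {E: 0.0 for E in ENERGIES}         # ln g(E), up to an additive constant
hist = {E: 0 for E in ENERGIES}
s = [1] * N
E = -N                                     # energy of the all-up state
ln_f = 1.0
while ln_f > 1e-4:
    for _ in range(20000):
        i = random.randrange(N)
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % N])
        E_new = E + dE
        # accept the single-spin flip with probability min(1, g(E)/g(E_new))
        if random.random() < math.exp(min(0.0, log_g[E] - log_g[E_new])):
            s[i] = -s[i]
            E = E_new
        log_g[E] += ln_f
        hist[E] += 1
    mean = sum(hist.values()) / len(hist)
    if min(hist.values()) > 0.8 * mean:    # histogram flat enough?
        ln_f /= 2.0                        # refine f and reset the histogram
        hist = {k: 0 for k in hist}

ratio = math.exp(log_g[0] - log_g[-N])     # estimate of g(0)/g(-8); exact is 70
print(round(ratio, 1))
```

Once ln g(E) is known, free-energy landscapes and other thermodynamic properties follow at any temperature by reweighting, which is what makes the method attractive for mapping folded, unfolded and intermediate states.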