979 results for Transmission problem
Abstract:
We consider a joint power control and transmission scheduling problem in wireless networks with average power constraints. While the capacity region of a wireless network is convex, a characterization of this region is a hard problem. We formulate a network utility optimization problem involving time-sharing across different "transmission modes," where each mode corresponds to the set of power levels used in the network. The structure of the optimal solution is a time-sharing across a small set of such modes. We use this structure to develop an efficient heuristic approach to finding a suboptimal solution through column generation iterations. This heuristic approach converges quite fast in simulations, and provides a tool for wireless network planning.
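For concreteness, the flavour of such a column-generation heuristic can be sketched as follows. This is a hedged toy illustration, not the formulation of the abstract above: the network model, candidate mode set, linear utility, power budget, and the use of SciPy's linprog are all assumptions made here. The restricted master LP chooses time-sharing fractions over a small pool of modes, and the pricing step adds the candidate mode with the largest positive reduced cost.

```python
# Illustrative column-generation loop for time-sharing over transmission modes.
# The network data, utility, and candidate-mode set below are toy assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_links = 3
# Each candidate mode: (per-link rate vector, average power it consumes).
candidates = [(rng.uniform(0.5, 2.0, n_links), rng.uniform(0.2, 1.0)) for _ in range(50)]
w = np.ones(n_links)   # linear utility weights on average per-link rates
P_avg = 0.5            # average power budget

pool = [(np.zeros(n_links), 0.0)]   # start from an "idle" mode so the restricted LP is feasible
in_pool = set()
res = None
for _ in range(25):
    utils = np.array([w @ r for r, _ in pool])
    powers = np.array([p for _, p in pool])
    # Restricted master LP: max sum_m theta_m*utils_m
    #   s.t. sum_m theta_m*powers_m <= P_avg, sum_m theta_m = 1, theta >= 0.
    res = linprog(-utils, A_ub=powers[None, :], b_ub=[P_avg],
                  A_eq=np.ones((1, len(pool))), b_eq=[1.0],
                  bounds=(0, None), method="highs")
    lam = -res.ineqlin.marginals[0]   # dual of the power constraint (>= 0)
    mu = -res.eqlin.marginals[0]      # dual of the time-sharing constraint
    # Pricing step: reduced cost of candidate mode m is  w@r_m - lam*p_m - mu.
    reduced = np.array([w @ r - lam * p - mu for r, p in candidates])
    if in_pool:
        reduced[list(in_pool)] = -np.inf   # do not re-add modes already in the pool
    best = int(np.argmax(reduced))
    if reduced[best] <= 1e-9:              # no improving mode: stop
        break
    pool.append(candidates[best])
    in_pool.add(best)

print("modes in final solution:", int(np.sum(res.x > 1e-6)), "objective:", -res.fun)
```

As in the abstract, the final time-sharing typically uses only a handful of the modes generated, which is what the last line reports.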
Abstract:
The basic concepts of tuned half-wave lines were covered by Hubert and Gent [1]. In this paper, the problem of overvoltages during faults and the stability of a system incorporating such tuned lines are discussed. Tuning-bank types and line arrangements that are satisfactory from the standpoint of stability are suggested. The behavior of a line tuned by distributed capacitors is analyzed, and its performance is compared with that of the other type of tuned line.
Abstract:
An analytical study of ferroresonance, with possible cases of its occurrence in series- and shunt-compensated systems, is presented. A term 'percentage unstable zone' is defined to compare the jump severity of different nonlinearities. A direct analytical method has been shown to yield complete information. An attempt has been made to find all four critical points: the jump-from and jump-to points of the ferroresonance jump phenomenon. The systems considered for analysis are typical 500 kV transmission systems of various lengths.
Abstract:
In this letter, we propose a scheme to improve the secrecy rate of cooperative networks using Analog Network Coding (ANC). ANC mixes signals in the air; the desired signal is then separated from the mixture at the legitimate receiver using techniques such as self-interference subtraction and signal nulling, thereby achieving better secrecy rates. Assuming global channel state information, memoryless adversaries, and the decode-and-forward strategy, we seek to maximize the average secrecy rate between the source and the destination, subject to an overall power budget. Then, exploiting the structure of the optimization problem, we compute its optimal solution. Finally, we use numerical evaluations to compare our scheme with conventional approaches.
Abstract:
This analysis is concerned with the calculation of the elastic wave transmission coefficients and coupling loss factors between an arbitrary number of structural components that are coupled at a point. A general approach to the problem is presented and it is demonstrated that the resulting coupling loss factors satisfy reciprocity. A key aspect of the method is the consideration of cylindrical waves in two-dimensional components, and this builds upon recent results regarding the energetics of diffuse wavefields when expressed in cylindrical coordinates. Specific details of the method are given for beam and thin plate components, and a number of examples are presented. © 2002 Acoustical Society of America.
Abstract:
It is a neural network truth universally acknowledged that the signal transmitted to a target node must be equal to the product of the path signal times a weight. Analysis of catastrophic forgetting by distributed codes leads to the unexpected conclusion that this universal synaptic transmission rule may not be optimal in certain neural networks. The distributed outstar, a network designed to support stable codes with fast or slow learning, generalizes the outstar network for spatial pattern learning. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field of arbitrarily many nodes, where the activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level. Weight changes at a node are apportioned according to the distributed pattern of converging signals. Three types of synaptic transmission (a product rule, a capacity rule, and a threshold rule) are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur. Only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the optimal unit of long-term memory in such a system is a subtractive threshold, rather than a multiplicative weight.
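As a toy illustration of the contrast drawn above between multiplicative weights and subtractive thresholds, the snippet below compares the total signal delivered to a single target node under the two transmission rules. The parameterization tau_i = 1 - w_i is an assumption chosen here so that the two rules coincide for a maximally compressed (winner-take-all) source pattern; this is not the distributed outstar model itself.

```python
# Toy comparison of a multiplicative-weight ("product") transmission rule and a
# subtractive-threshold rule for the signal a source field sends to one target node.
# The choice tau_i = 1 - w_i is an assumption made so the rules agree when only
# one source node is fully active (winner-take-all activity).
import numpy as np

w = np.array([0.8, 0.3, 0.6])          # adaptive weights on three source nodes
tau = 1.0 - w                          # assumed matching subtractive thresholds

def product_signal(x):                 # S_i = x_i * w_i
    return np.sum(x * w)

def threshold_signal(x):               # S_i = max(x_i - tau_i, 0)
    return np.sum(np.maximum(x - tau, 0.0))

x_wta = np.array([1.0, 0.0, 0.0])      # maximally compressed (winner-take-all) activity
x_dist = np.array([0.4, 0.3, 0.3])     # distributed activity over the source field

print(product_signal(x_wta), threshold_signal(x_wta))    # equal: 0.8 and 0.8
print(product_signal(x_dist), threshold_signal(x_dist))  # differ once activity is distributed
```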
Abstract:
The problem of diffraction of an optical wave by a 2D periodic metal aperture array with square, circular, and ring apertures is solved with allowance for the finite permittivity of a metal in the optical band. The correctness of the obtained results is verified through comparison with experimental data. It is shown that the transmission coefficient can be substantially greater than the corresponding value reached in the case of diffraction by a grating in a perfectly conducting screen.
Abstract:
We consider the problem of secure transmission in two-hop amplify-and-forward untrusted relay networks. We analyze the ergodic secrecy capacity (ESC) and present compact expressions for the ESC in the high signal-to-noise ratio regime. We also examine the impact of large scale antenna arrays at either the source or the destination. For large antenna arrays at the source, we confirm that the ESC is solely determined by the channel between the relay and the destination. For very large antenna arrays at the destination, we confirm that the ESC is solely determined by the channel between the source and the relay.
Abstract:
Taenia solium taeniasis and cysticercosis are recognized as a major public health problem in Latin America. T. solium transmission affects not only individual health but also social and economic development, perpetuating the cycle of poverty. To determine prevalence rates, population knowledge, and risk factors associated with transmission, an epidemiological study was undertaken in the rural community of Jalaca. Two standardized questionnaires were used to collect epidemiological and T. solium general knowledge data. The Kato-Katz technique and an immunoblot assay (EITB) were used to determine taeniasis prevalence and seroprevalence, respectively. In total, 139 individuals belonging to 56 households participated in the study. Household characteristics were consistent with conditions of poverty in rural Honduras: 21.4% had no toilet or latrine, 19.6% had earthen floors, and 51.8% lacked indoor tap water. Pigs were raised in 46.4% of households, of which 70% let their pigs roam freely. A human seroprevalence rate of 18.7% and a taeniasis prevalence rate of 2.4% were found. Only four persons correctly answered at least 6 out of ten T. solium knowledge questions, for a passing rate of 2.9%. In general, a serious gap in knowledge regarding how humans acquire the infections, especially neurocysticercosis, was identified. After regression analysis, the ability to recognize adult tapeworms and awareness of the clinical importance of taeniasis were found to be significant risk factors for T. solium seropositivity. These results demonstrate a high level of transmission and a low level of knowledge about Taenia solium in Jalaca. Consequently, intervention measures integrated with health education are necessary to decrease the burden caused by this parasite.
Abstract:
Background: Launched in 2004 in Benin, the national programme for the prevention of mother-to-child transmission of HIV (PTME) appears, overall, to be well implemented. However, a 2005 survey revealed certain difficulties, particularly regarding programme coverage: only 70 to 75% of pregnant women seen in antenatal consultations were tested, and 33% of the 1,150 women who tested HIV-positive delivered according to the PTME protocol. Moreover, coverage in terms of testing and follow-up of infected pregnant women varies widely from one site to another. This weak coverage led us to question the organizational context in which the programme is implemented. Objective: The general objective of this thesis is to analyse the implementation of PTME in Benin. The first specific objective is to identify and understand the issues related to how pregnant women are reached for testing. The second is to understand the implementation context and its influence on how PTME is delivered. Methodology: This evaluative research is based on a case study. Six maternity units were selected so as to represent the different contexts in which services are organized. Data were collected through non-participant observation, semi-structured interviews (n=41) with service providers, document analysis of the maternity units' activity reports, and questionnaires administered to pregnant women attending antenatal consultations (n=371). Results: The first article assessed whether consent to testing was free and informed. A majority of the pregnant women followed in private centres were tested without actually being informed about PTME, whereas women attending the other maternity units knew the components of PTME better. The voluntary nature of women's consent is generally respected at all sites. The second article analyses the quality of pre-test counselling. Only 54% of pregnant women took part in group counselling and 80% in individual counselling. In centres that provide group information sessions, quality is better than in centres that provide individual counselling exclusively. The third article analyses the influence of the implementation context on programme delivery. Factors that contribute favourably to implementation include the proximity of a referral centre and the coordination of PTME activities within a geographic area, giving responsibility to a provider dedicated to PTME, regular supportive supervision accompanied by group discussion sessions, and the involvement of peer counsellors (médiatrices) in actively tracing women lost to follow-up. Conversely, making the peer counsellors responsible for individual counselling and testing does not favour good implementation of PTME. Conclusion: Our results show that the organization of health services within the PTME programme can be acted upon to improve how the programme is implemented in both private and public centres, without this requiring a net addition of resources. This applies to improving the quality of counselling and testing, establishing an internal organizational learning process, and coordinating services.
Abstract:
We are flooded daily with countless unsolicited electronic messages, whether advertisements, viruses, or what are now called metaviruses. The latter are hoaxes sent to Internet users suggesting that they perform some action, which will cause more or less serious damage to the user's system. The author examines the problems these metaviruses raise with respect to the civil liability of their senders. He concludes that this regime, although applicable in theory, remains ill-suited to the problem, particularly when it comes to proving the elements of civil liability. One must first establish the sender's capacity (or incapacity) of discernment, whether or not the recipient knew of that state, and proof of faulty conduct on the part of the sender, or even of both parties. There remains the question of what a reasonable course of conduct would have been in the situation. Note that the victim could be found partially responsible for his or her own damages. It then remains to prove the causal link between the act and the damage, which, given the factual circumstances, can prove an arduous task. The author concludes that the advisability of such a recourse is highly questionable, since the costs are disproportionate to the damages and the chances that a judge will find the sender of the metavirus liable are rather slim. The best solution, he adds, remains caution.
Abstract:
Reinforcement Learning (RL) refers to a class of learning algorithms in which a learning system learns which action to take in different situations using a scalar evaluation received from the environment after performing an action. RL has been successfully applied to many multi-stage decision-making problems (MDPs), in which the learning system decides at each stage which action to take. The Economic Dispatch (ED) problem is an important scheduling problem in power systems: it decides the amount of generation to be allocated to each generating unit so that the total cost of generation is minimized without violating system constraints. In this paper, we formulate the economic dispatch problem as a multi-stage decision-making problem and develop an RL-based algorithm to solve it. The performance of our algorithm is compared with other recent methods. The main advantage of our method is that it can learn the schedule for all possible demands simultaneously.
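A minimal sketch of this kind of formulation, assuming a toy three-unit system with quadratic costs, a discretized output grid, and tabular Q-learning; the unit data, discretization, and penalty below are illustrative assumptions rather than the paper's algorithm. Each stage picks one unit's output, the state carries the remaining demand, and training over randomly drawn demands is what lets one table cover all demands simultaneously.

```python
# Tabular Q-learning sketch for a toy economic dispatch problem posed as a
# multi-stage decision problem: stage k chooses unit k's output, the state is
# (stage, remaining demand), and the reward is the negative generation cost.
import numpy as np

rng = np.random.default_rng(1)
a = np.array([0.010, 0.015, 0.020])     # quadratic cost coefficients (cost_i = a_i*g^2 + b_i*g)
b = np.array([2.0, 1.8, 2.2])           # linear cost coefficients
levels = np.arange(0, 101, 10)          # each unit can output 0..100 MW in 10 MW steps
demands = np.arange(60, 241, 10)        # the demands the schedule should cover

n_units, n_actions = len(a), len(levels)
rem_states = int(demands.max() // 10) + 1
Q = np.zeros((n_units, rem_states, n_actions))   # Q[stage, remaining-demand/10, action]

def cost(i, g):
    return a[i] * g ** 2 + b[i] * g

alpha, eps = 0.1, 0.2
for _ in range(20000):
    remaining = int(rng.choice(demands))          # learn all demands simultaneously
    for k in range(n_units - 1):                  # the last unit's output is forced
        s = remaining // 10
        act = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[k, s]))
        g = min(int(levels[act]), remaining)
        reward = -cost(k, g)
        remaining -= g
        if k < n_units - 2:
            target = reward + np.max(Q[k + 1, remaining // 10])
        else:                                     # last unit must supply whatever remains
            last = cost(n_units - 1, remaining) if remaining <= levels.max() else 1e4
            target = reward - last
        Q[k, s, act] += alpha * (target - Q[k, s, act])

def dispatch(demand):
    """Greedy schedule for a given demand after training."""
    remaining, schedule = int(demand), []
    for k in range(n_units - 1):
        g = min(int(levels[int(np.argmax(Q[k, remaining // 10]))]), remaining)
        schedule.append(g)
        remaining -= g
    schedule.append(remaining)                    # last unit covers the rest
    return schedule

print(dispatch(180), dispatch(240))
```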
Abstract:
This paper presents Reinforcement Learning (RL) approaches to the Economic Dispatch problem. Economic Dispatch is first formulated as a multi-stage decision-making problem, and two variants of RL algorithms are then presented. A third algorithm that takes transmission losses into consideration is also explained. The efficiency and flexibility of the proposed algorithms are demonstrated on several representative systems: a three-generator system with a given generation cost table, the IEEE 30-bus system with quadratic cost functions, a 10-generator system with piecewise quadratic cost functions, and a 20-generator system considering transmission losses. A comparison of the computation times of the different algorithms is also carried out.
Abstract:
This paper presents a parallel genetic algorithm for the Steiner Problem in Networks (SPN). Several previous papers have proposed the adoption of GAs and other metaheuristics to solve the SPN, demonstrating the validity of their approaches. This work differs from them in two main respects: the size and characteristics of the networks used in the experiments, and the aim that motivated it. That aim was to build a term of comparison for validating deterministic and computationally inexpensive algorithms that can be used in practical engineering applications, such as multicast transmission in the Internet. At the same time, the large size of our sample networks requires a parallel implementation of the Steiner GA that is able to deal with such large problem instances.
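A minimal, non-parallel sketch of a GA for the SPN, using the common bitstring-over-Steiner-vertices encoding: a chromosome selects which non-terminal nodes to include, and fitness is the weight of a minimum spanning tree of the induced subgraph, penalized when it is disconnected. The random graph, networkx-based fitness, and GA parameters are assumptions for illustration, not the paper's implementation.

```python
# Toy serial GA for the Steiner Problem in Networks: chromosomes pick Steiner
# vertices, fitness is the MST weight of the subgraph induced by terminals plus
# selected vertices (penalized if disconnected).
import random
import networkx as nx

random.seed(0)
G = nx.connected_watts_strogatz_graph(40, 6, 0.3, seed=0)   # guaranteed-connected toy graph
for u, v in G.edges:
    G[u][v]["weight"] = random.randint(1, 10)
terminals = {0, 5, 11, 17, 23, 31}
steiner_nodes = [n for n in G.nodes if n not in terminals]
PENALTY = 10_000

def fitness(chrom):
    nodes = terminals | {n for n, bit in zip(steiner_nodes, chrom) if bit}
    sub = G.subgraph(nodes)
    if not nx.is_connected(sub):
        return PENALTY
    return sum(d["weight"] for _, _, d in nx.minimum_spanning_tree(sub).edges(data=True))

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))            # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(chrom, rate=0.02):
    return [1 - bit if random.random() < rate else bit for bit in chrom]

pop = [[random.randint(0, 1) for _ in steiner_nodes] for _ in range(60)]
for _ in range(100):
    pop.sort(key=fitness)
    survivors = pop[:20]                          # elitist truncation selection
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(len(pop) - len(survivors))]
    pop = survivors + children

best = min(pop, key=fitness)
print("best Steiner tree weight found:", fitness(best))
```

In a parallel version such as the one the paper describes, the fitness evaluations or sub-populations would be distributed across processors; the encoding and operators above stay the same.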
Abstract:
In terrestrial television transmission, multiple paths of various lengths can occur between the transmitter and the receiver. Such paths arise because of reflections from objects outside the direct transmission path. The multipath signals arriving at the receiver are all detected along with the intended signal, causing time-displaced replicas called 'ghosts' to appear on the television picture. With an increasing number of people living within built-up areas, ghosting is becoming commonplace and deghosting is therefore increasingly important. This thesis uses a deterministic time-domain approach to deghosting, resulting in a simple solution to the problem of removing ghosts. A new video detector is presented which reduces the synchronous detector's local oscillator phase error, caused by any practical size of ghost, to a lower level than has previously been achieved. With the new detector, dispersion of the video signal is minimised and a known closed-form time-domain description of the individual ghost components within the detected video is subsequently obtained. Developed from mathematical descriptions of the detected video, a new deghoster filter structure is presented which is capable of removing both the in-phase (I) and the quadrature (Q) ghost signals arising from VSB operation. The new deghoster filter requires much less hardware than any previous deghoster capable of removing both I and Q ghost components. A new channel identification algorithm, based on simple correlation techniques, was also developed to find the delay and complex amplitude characteristics of individual ghosts. The result of the channel identification is then passed to the new I and Q deghoster filter for ghost cancellation. Five papers have been published from the research work performed for this thesis.
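A toy baseband illustration of the correlation-based identification step followed by a simple recursive canceller, assuming a known pseudorandom training signal and an idealized echo model; the VSB I/Q detector and deghoster filter developed in the thesis are far more involved than this sketch.

```python
# Toy baseband ghost identification by cross-correlation, followed by a
# recursive canceller. Training signal, ghost parameters, and threshold are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
N = 4000
s = rng.choice([-1.0, 1.0], size=N)              # known pseudorandom training signal
ghosts = {40: 0.30 + 0.10j, 95: -0.15 + 0.05j}   # true delay (samples) -> complex amplitude

r = s.astype(complex).copy()
for d, amp in ghosts.items():
    r[d:] += amp * s[:-d]                        # received = direct signal + delayed replicas

# Channel identification: normalized cross-correlation of the received signal
# against the known training signal picks out each ghost's delay and amplitude.
max_delay = 200
corr = {d: np.vdot(s[:N - d], r[d:]) / np.dot(s, s) for d in range(1, max_delay)}
est = {d: c for d, c in corr.items() if abs(c) > 0.1}    # keep only clear peaks
print("estimated ghosts:", {d: np.round(c, 2) for d, c in est.items()})

# Recursive canceller: y[n] = r[n] - sum_k a_k*y[n - d_k] inverts the channel
# 1 + sum_k a_k*z^{-d_k}, so y approximates the original signal s.
y = np.zeros(N, dtype=complex)
for n in range(N):
    y[n] = r[n] - sum(amp * y[n - d] for d, amp in est.items() if n - d >= 0)

print("residual ghost power:", float(np.mean(np.abs(y - s) ** 2)))
```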