894 results for probability of error
Abstract:
The present study examines the effects of familiarity on the identification of individuals in a voice lineup. The voice lineup is a technique inspired by a paralegal procedure for the visual identification of individuals. It consists of presenting several voices with similar acoustic characteristics, defined according to criteria recognized in the literature. The main objective of the present study was to determine whether the familiarity of a voice in a voice lineup can yield a high rate of correct speaker identification (> 99%). This study is the first to quantify the familiarity criterion between the identifier and a person associated with a "target voice" according to four parameters related to contacts (communications) between the individuals: the recency of contact (how long ago the last encounter with the individual took place), the average duration and frequency of contact, and the period over which the contacts occurred. Three different voice lineups were constructed, each containing 10 male voices including a target voice that could be highly familiar; this degree of familiarity was established with a questionnaire. The participants (identifiers, n = 44) were selected according to their level of familiarity with the target voice. All voices were those of native speakers of Quebec French, and all had mean fundamental frequencies similar to the target voice (within one semitone). In addition, each voice lineup contained utterances varying in length according to a given number of syllables (1, 4, 10, 18 syll.). The results show that when the degree of familiarity is controlled and an utterance of 4 syllables or more is used, identification rates are obtained with an exact error probability of p < 1 × 10⁻¹². These identification rates exceed those currently obtained with automated systems.
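A hedged illustration of what an "exact error probability" of this magnitude corresponds to: under pure guessing in a 10-voice lineup, the chance of a correct pick is 1/10, so the exact binomial probability of a near-perfect group identification rate arising by chance alone drops far below 10⁻¹² once a few dozen correct responses accumulate. The sketch below computes such an exact tail probability with SciPy; the counts are hypothetical, not the study's data.

```python
# Illustrative only: exact binomial tail probability for a 10-voice lineup.
# The counts below are hypothetical, not the figures reported in the study.
from scipy.stats import binom

n_trials = 44          # e.g. one judgment per participant (assumed)
n_correct = 43         # hypothetical number of correct identifications
p_chance = 1 / 10      # chance level when choosing among 10 voices

# Exact probability of observing at least this many correct answers by chance
p_error = binom.sf(n_correct - 1, n_trials, p_chance)
print(f"exact chance probability: {p_error:.3e}")   # far below 1e-12
```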
Abstract:
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of the above chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene and ammonia is carried out using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals like chlorine, ammonia and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be maximum for chlorine release from a chlor-alkali industry located in the area. The results of consequence modelling are useful for the estimation of individual risk and societal risk in the above industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads. Individual and societal risks are also estimated at different locations. Mapping of the threat zones due to the different incident outcome cases from the different MAH industries is done with the help of ArcGIS. Fault Tree Analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate the failure probability of components precisely, due to insufficient data or the vague characteristics of the basic events. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values for the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and proven fuzzy logic. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to a chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
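For readers unfamiliar with quantitative fault tree analysis, the sketch below shows the basic gate arithmetic applied once basic-event probabilities are available (from data or, as in the thesis, from expert elicitation and fuzzy aggregation). The tree structure and probabilities are hypothetical and only illustrate the technique, not the chlorine-release tree studied in the thesis.

```python
# Minimal quantitative fault tree sketch: independent basic events combined
# through OR and AND gates up to a top event (hypothetical structure/values).

def or_gate(probs):
    """P(at least one event occurs), assuming independence."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    """P(all events occur), assuming independence."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical basic-event probabilities
valve_leak     = 1.0e-3
gasket_failure = 5.0e-4
operator_error = 2.0e-3
alarm_failure  = 1.0e-2

# Release occurs if (a leak path exists) AND (the alarm fails to stop it)
leak_path = or_gate([valve_leak, gasket_failure, operator_error])
top_event = and_gate([leak_path, alarm_failure])
print(f"P(top event) = {top_event:.2e}")
```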
Abstract:
The Global Positioning System (GPS), with its high integrity, continuous availability and reliability, revolutionized navigation based on radio ranging. With four or more GPS satellites in view, a GPS receiver can find its location anywhere over the globe with an accuracy of a few meters. High accuracy, within centimeters or even millimeters, is achievable by correcting the GPS signal with an external augmentation system. The use of satellites for critical applications like navigation has become a reality through the development of these augmentation systems (such as WAAS, SDCM, and EGNOS), whose primary objective is to provide the essential integrity information needed for navigation service in their respective regions. Apart from these, many countries have initiated the development of space-based regional augmentation systems, like GAGAN and IRNSS of India, MSAS and QZSS of Japan, and COMPASS of China. In the future, these regional systems will operate simultaneously and emerge as a Global Navigation Satellite System (GNSS) to support a broad range of activities in the global navigation sector. Among the different error sources in GPS precise positioning, the propagation delay due to atmospheric refraction is a limiting factor on the achievable accuracy. Although WADGPS, aimed at accurate positioning over a large area, broadcasts corrections for the different errors involved in GPS ranging, including ionospheric and tropospheric errors, the large temporal and spatial variations of atmospheric parameters, especially in the lower atmosphere (troposphere), mean that these broadcast tropospheric corrections are not sufficiently accurate. This necessitates estimation of the tropospheric error based on realistic values of tropospheric refractivity. Presently available methodologies for the estimation of tropospheric delay are mostly based on atmospheric data and GPS measurements from mid-latitude regions, where atmospheric conditions differ significantly from those over the tropics; no such attempts had been made over the tropics. In practice, when measured atmospheric parameters are not available, only analytical models developed from mid-latitude data can be used for this purpose. The major drawback of these existing models is that they neglect the seasonal variation of atmospheric parameters at stations near the equator, and in the tropics they underestimate the delay on quite a few occasions. In this context, the present study is a first and major step towards the development of models for tropospheric delay over the Indian region, which is a prime requisite for future space-based navigation programs (GAGAN and IRNSS). Apart from the models based on measured surface parameters, a region-specific model that does not require any measured atmospheric parameter as input, but depends only on latitude and day of the year, was developed for the tropical region with emphasis on the Indian sector. The large variability of atmospheric water vapor content on short spatial and/or temporal scales makes its measurement rather involved and expensive. A local network of GPS receivers is an effective tool for water vapor remote sensing over land, and this recently developed technique proves effective for measuring precipitable water (PW). The potential of using GPS to estimate atmospheric water vapor in all weather conditions and with high temporal resolution is explored, which will be useful for retrieving columnar water vapor from ground-based GPS data.
A good network of GPS receivers could be a major source of water vapor information for Numerical Weather Prediction models and could act as a surrogate for the data gap in microwave remote sensing of water vapor over land.
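As a pointer to the kind of surface-parameter modelling discussed here, the sketch below evaluates the widely used Saastamoinen zenith hydrostatic delay and a simple conversion from zenith wet delay to precipitable water. The coefficients are the standard textbook values, and the conversion factor is treated as an approximate constant rather than computed from the weighted mean temperature; this is illustrative background, not the region-specific models developed in the thesis.

```python
# Zenith tropospheric delay sketch (standard Saastamoinen-type formulas,
# shown for illustration; not the Indian-region models of the thesis).
import math

def zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m):
    """Saastamoinen zenith hydrostatic delay in metres."""
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 0.00028e-3 * height_m
    return 0.0022768 * pressure_hpa / f

def precipitable_water(zwd_m, pi_factor=0.15):
    """Convert zenith wet delay (m) to precipitable water (m).
    pi_factor ~ 0.15 is an approximation; it actually varies with the
    weighted mean temperature of the atmosphere."""
    return pi_factor * zwd_m

zhd = zenith_hydrostatic_delay(pressure_hpa=1010.0, lat_deg=10.0, height_m=20.0)
ztd = 2.45                      # hypothetical total zenith delay estimated from GPS (m)
zwd = ztd - zhd                 # wet component
print(f"ZHD = {zhd:.3f} m, PW = {precipitable_water(zwd) * 1000:.1f} mm")
```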
Abstract:
Warships are generally sleek and slender, with V-shaped sections and block coefficients below 0.5, compared with the fuller forms and higher values of commercial ships. They normally operate in the higher Froude number regime, and the hydrodynamic design is primarily aimed at achieving higher speeds with minimum power. The structural design and analysis methods therefore differ from those for commercial ships. Certain design guidelines are given in documents such as the Naval Engineering Standards, and one of the new developments in this regard is the introduction of classification society rules for the design of warships. The marine environment imposes subjective and objective uncertainties on ship structure. The uncertainties in loads, material properties, etc., make reliable prediction of ship structural response a difficult task. Strength, stiffness and durability criteria for warship structures can be established by investigations based on elastic analysis, ultimate strength analysis and reliability analysis. For the analysis of complicated warship structures, special means and valid approximations are required. Preliminary structural design of a frigate-size ship has been carried out. A finite element model of the hold, representative of the complexities in the geometric configuration, has been created using the finite element software NISA. Two other models representing the geometry to a limited extent have also been created: one with two transverse frames and the attached plating along with the longitudinal members, and the other representing the plating and longitudinal stiffeners between two transverse frames. Linear static analysis of the three models has been carried out, each with three different boundary conditions. The structural responses have been checked for deflections and stresses against the permissible values, and the structure has been found adequate in all cases. The stresses and deflections predicted by the frame model are comparable with those of the hold model, but no such agreement was obtained between the inter-stiffener plating model and the other two models. Progressive collapse analyses of the models have been conducted for the three boundary conditions, considering geometric nonlinearity and then combined geometric and material nonlinearity for the hold and frame models. The von Mises–Ilyushin yield criterion with an elastic-perfectly plastic stress-strain curve has been chosen. In each case, P-Delta curves have been generated and the ultimate load causing failure (ultimate load factor) has been identified as a multiple of the design load specified by NES. Reliability analysis of the hull module under combined geometric and material nonlinearities has been conducted. Young's modulus and the shell thickness have been chosen as the random variables, and randomly generated values have been used in the analysis. The First Order Second Moment method has been used to predict the reliability index and thereafter the probability of failure, and the values have been compared against standard values published in the literature.
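As a pointer to the First Order Second Moment (FOSM) step described above, the sketch below computes a reliability index and failure probability for a simple linear limit state with two random variables; the means and standard deviations are placeholders, not the ship's actual values.

```python
# First Order Second Moment (FOSM) sketch for a linear limit state g = R - S,
# where R is resistance and S is load effect (illustrative values only).
import math
from scipy.stats import norm

mu_R, sigma_R = 320.0, 32.0    # hypothetical ultimate strength (MPa) and std dev
mu_S, sigma_S = 210.0, 40.0    # hypothetical applied stress (MPa) and std dev

mu_g = mu_R - mu_S
sigma_g = math.sqrt(sigma_R**2 + sigma_S**2)

beta = mu_g / sigma_g               # reliability (safety) index
p_failure = norm.cdf(-beta)         # probability of failure under normality

print(f"beta = {beta:.2f}, Pf = {p_failure:.2e}")
```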
Abstract:
Under rainfed conditions the growing period usually falls within the humid months. Hence, for agricultural planning, knowledge about the variability in the duration of the humid season is very much needed. The crucial problem affecting agriculture is the persistence in receiving a specific amount of rainfall during a short period. Agricultural operations and decision making are highly dependent on the probability of receiving given amounts of rainfall; such periods should match the water requirements of the different phenological phases of the crops. While prolonged dry periods during sensitive phases are detrimental to growth and lower the yields, excess rainfall causes soil erosion and loss of soil nutrients. These factors point to the importance of evaluating wet and dry spells. In this study, weekly rainfall data have been analysed to estimate the probability of wet and dry periods at selected stations in each agroclimatic zone, and the crop growth potential of the growing seasons has been analysed. The thesis consists of six chapters.
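A common way to quantify wet and dry spells from weekly rainfall is a first-order Markov chain of wet/dry weeks; the sketch below estimates the initial and conditional probabilities from a rainfall series. The 20 mm threshold defining a "wet" week and the sample data are assumptions for illustration, not the study's values.

```python
# First-order Markov chain sketch for weekly wet/dry spells.
# Threshold and data are illustrative; they are not the study's values.
weekly_rain_mm = [3, 0, 25, 40, 12, 55, 60, 8, 0, 30, 45, 70, 22, 5, 0, 0]
WET_THRESHOLD = 20.0            # assumed definition of a "wet" week

states = ["W" if r >= WET_THRESHOLD else "D" for r in weekly_rain_mm]

p_wet = states.count("W") / len(states)                  # initial probability
pairs = list(zip(states, states[1:]))
ww = sum(1 for a, b in pairs if a == "W" and b == "W")
dw = sum(1 for a, b in pairs if a == "D" and b == "W")
p_w_given_w = ww / max(states[:-1].count("W"), 1)        # P(wet | previous wet)
p_w_given_d = dw / max(states[:-1].count("D"), 1)        # P(wet | previous dry)

print(f"P(W) = {p_wet:.2f}, P(W|W) = {p_w_given_w:.2f}, P(W|D) = {p_w_given_d:.2f}")
```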
Abstract:
Econometrics is a young science. It developed during the twentieth century, from the mid-1930s onward, primarily after World War II. Econometrics is the unification of statistical analysis, economic theory and mathematics. Its history can be traced to the use of statistical and mathematical analysis in economics. The most prominent contributions during the initial period can be seen in the works of Tinbergen and Frisch, and also of Haavelmo, from the 1940s through the mid-1950s. From the rudimentary application of statistics to economic data, such as the use of the laws of error through the development of least squares by Legendre, Laplace and Gauss, the discipline of econometrics later witnessed the applied work of Edgeworth and Mitchell. A very significant milestone in its evolution has been the work of Tinbergen, Frisch and Haavelmo in their development of multiple regression and correlation analysis, which they used to test different economic theories with time series data. In spite of the fact that some predictions based on econometric methodology might have gone wrong, the sound scientific nature of the discipline cannot be ignored. This is reflected in the economic rationale underlying any econometric model, and in the statistical and mathematical reasoning for the various inferences drawn. The relevance of econometrics as an academic discipline assumes high significance in this context. Because of the interdisciplinary nature of econometrics (a unification of economics, statistics and mathematics), the subject can be taught in all these broad areas, notwithstanding the fact that most often only economics students are offered the subject, as those of other disciplines might not have an adequate economics background to understand it. In fact, econometrics is quite relevant even for technical courses (like engineering), business management courses (like the MBA), and professional accountancy courses; it is still more relevant for research students of the various social sciences, commerce and management. In the ongoing scenario of globalization and economic deregulation, there is a need to give added thrust to the academic discipline of econometrics in higher education, across the various social science streams, commerce, management, professional accountancy, etc. In this way the analytical ability of students can be sharpened and their ability to approach socio-economic problems mathematically can be improved, enabling them to derive scientific inferences and solutions to such problems. The utmost significance of hands-on practical training in the use of computer-based econometric packages, especially at the postgraduate and research levels, needs to be pointed out here. Mere learning of the econometric methodology or the underlying theories alone would not have much practical utility for students in their future careers, whether in academics, industry, or in practice. This paper seeks to trace the historical development of econometrics and to study its current status as an academic discipline in higher education. Besides, the paper looks into the problems faced by teachers in teaching econometrics, and those of students in learning the subject, including effective application of the methodology in real-life situations. Accordingly, the paper offers some meaningful suggestions for the effective teaching of econometrics in higher education.
Abstract:
The present study describes the interaction of a two-level atom and a squeezed field with time-varying frequency. By applying a sinusoidal variation to the frequency of the field, the randomness in the population inversion is reduced and the collapses and periodic revivals are regained. Quantum optics is an emerging field in physics which mainly deals with the interaction of atoms with quantised electromagnetic fields. The Jaynes-Cummings Model (JCM) is a key model among them, describing the interaction between a two-level atom and a single-mode radiation field. The study begins with a brief history of light, atoms and their interactions, and discusses the interaction between atoms and electromagnetic fields. It suggests a method to manipulate the population inversion due to the interaction and to control the randomness in it, by applying a time dependence to the frequency of the interacting squeezed field. The change in the behaviour of the population inversion due to the presence of a phase factor in the applied frequency variation is explained. The study also describes the interaction between a two-level atom and an electromagnetic field in a nonlinear Kerr medium, and deals with atomic and field state evolution in a coupled cavity system. Our results suggest a new method to control and manipulate the population of states in two-level atom-radiation interaction, which is essential for quantum information processing. We have also studied the variation of the atomic population inversion with time when a two-level atom interacts with a light field that has a sinusoidal frequency variation with a constant phase. In both the coherent field and squeezed field cases, the population inversion variation is completely different from the zero-phase frequency modulation case. It is observed that in the presence of a non-zero phase φ, the population inversion oscillates sinusoidally. Also, the collapses and revivals gradually disappear as φ increases from 0 to π/2. When φ = π/2 the evolution of the population inversion is identical to the case when a two-level atom interacts with a Fock state. Thus, by applying a phase-shifted frequency modulation one can induce, in a linear medium, sinusoidal oscillations of the atomic inversion like those normally observed in a Kerr medium. We noticed that the entanglement between the atom and the field can be controlled by varying the period of the field frequency fluctuations. The system has been solved numerically and its behaviour for different initial conditions and different susceptibility values is analysed. It is observed that for weak cavity coupling the effect of the susceptibility is minimal. For strong cavity coupling, the susceptibility factor modifies the way in which the probability oscillates with time. The effect of susceptibility on the probability of states is closely related to the initial state of the system.
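As a baseline for the collapse-and-revival behaviour being manipulated here, the sketch below evaluates the textbook Jaynes-Cummings population inversion for an initially excited atom interacting with a resonant coherent field, W(t) = Σₙ Pₙ cos(2g√(n+1) t) with Poissonian photon statistics. The coupling constant and mean photon number are illustrative; the frequency modulation and squeezed-field effects studied in the thesis are not included.

```python
# Standard (unmodulated) Jaynes-Cummings population inversion for a coherent
# field, shown as a baseline for the collapse-and-revival dynamics.
import numpy as np
from scipy.stats import poisson

g = 1.0                       # atom-field coupling constant (arbitrary units)
nbar = 20.0                   # mean photon number of the coherent field
n = np.arange(0, 200)         # photon-number cutoff
p_n = poisson.pmf(n, nbar)    # Poissonian photon statistics of a coherent state

t = np.linspace(0.0, 60.0, 3000)
# W(t) = sum_n P_n cos(2 g sqrt(n+1) t): collapse, then revival near 2*pi*sqrt(nbar)/g
W = (p_n[:, None] * np.cos(2.0 * g * np.sqrt(n[:, None] + 1.0) * t[None, :])).sum(axis=0)

print("inversion at t = 0:", W[0])                  # ~ +1 (atom initially excited)
print("revival time ~", 2 * np.pi * np.sqrt(nbar) / g)
```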
Abstract:
Digital stochastic magnetic-field sensor array. Stefan Rohrer. Within a multi-year research project funded by the German Research Foundation (DFG), digital magnetic-field sensors with a width down to 1 µm were developed at the Institut für Mikroelektronik (IPM) of the University of Kassel. This dissertation presents a magnetic-field sensor array that emerged from this research project and was specifically designed to detect digital magnetic fields quickly and on a minimal area with good spatial and temporal resolution. The test chip, still fabricated in a 1.0 µm CMOS process, operates up to a clock frequency of 27 MHz at a sensor pitch of 6.75 µm. It is thus currently the smallest and fastest digital magnetic-field sensor array in a standard CMOS process. Converted to a 0.09 µm technology, frequencies of up to 1 GHz can be reached at a sensor pitch below 1 µm. The dissertation describes the most important results of the project in detail. The basis of the sensor is a cross-coupled inverter arrangement. A double-drain MAGFET based on the Hall effect serves as the magnetic-field-sensitive element and influences the behaviour of the latch. The strength and polarity of the magnetic field can be determined from the digital output data. The overall arrangement forms a stochastic magnetic-field sensor. A model for the switching behaviour of the cross-coupled inverters is presented. The noise contributions of the sensor are analysed and modelled in a system of stochastic differential equations. The solution of the stochastic differential equation shows the evolution of the probability distribution of the output signal over time and which factors influence the error probability of the sensor. It indicates which design and layout parameters of a stochastic sensor lead to an optimal result. The circuits and layout components of a digital stochastic sensor based on these theoretical calculations are presented. Because of technology-related process tolerances, each detector requires its own compensating calibration; different implementations of this calibration are presented and evaluated. For more accurate modelling, a SPICE model is set up and used to derive a stochastic differential equation for the switching behaviour of the sensor with SPICE-determined coefficients. Compared with standard magnetic-field sensors, the stochastic digital evaluation offers the advantage of flexible measurement. One can choose between fast measurements at reduced accuracy with high local resolution, or high accuracy when evaluating slowly varying magnetic fields in the range below 1 mT. The thesis presents the measurement results of the test chip. The measured sensitivity and error probability as well as the optimal operating points and the characteristic curves are reported. The relative sensitivity of the MAGFETs is 0.0075/T, and the achievable error probabilities are listed in the thesis. The measured switching behaviour of the stochastic sensors agrees well with the theoretical model.
Various measurements of analog and digital magnetic fields confirm the applicability of the sensor for fast magnetic-field measurements up to 27 MHz, even for small magnetic fields below 1 mT. Measurements of the sensor characteristic as a function of temperature show that the sensitivity increases markedly at very low temperatures due to the reduction of noise. A summary and an extensive bibliography give an overview of the state of the art.
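To illustrate the stochastic evaluation principle described above, the sketch below uses a deliberately simplified one-dimensional latch model: a small field-proportional offset plus white noise drives a regenerative (bistable) decision, and repeated trials estimate the probability of a '1' decision, and hence the error probability, for a given field. The drift model, noise level and all parameters are assumptions for illustration, not the dissertation's SPICE-derived equations.

```python
# Highly simplified Monte Carlo / Euler-Maruyama sketch of a stochastic
# comparator-latch sensor: P(output = 1) shifts with the applied field B.
# The model and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def decision_probability(b_field_mT, trials=20000, steps=200, dt=1e-3,
                         gain=5.0, regen=40.0, noise=1.0):
    """Estimate P(latch resolves to '1') for a given field (illustrative model)."""
    x = np.zeros(trials)                       # latch state starts balanced
    for _ in range(steps):
        drift = regen * x + gain * b_field_mT  # regeneration + field-induced offset
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(trials)
    return np.mean(x > 0)

for b in (-1.0, -0.1, 0.0, 0.1, 1.0):          # field in mT
    p1 = decision_probability(b)
    print(f"B = {b:+.1f} mT  ->  P(1) = {p1:.3f}, error prob. = {min(p1, 1 - p1):.3f}")
```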
Abstract:
Protecting the quality of children's growth and development is a supreme requirement for the betterment of a nation. The double burden of child malnutrition is emerging worldwide; it might strongly influence the quality of child brain development and cannot be compensated for later in life. Milk contributes a notable portion of the diet during infancy and childhood. Thus, deep insight into milk consumption patterns might help explain the phenomenon of double-burden child malnutrition and its correlation with cognitive impairments. Objective: The current study is intended (1) to examine the current face of the Indonesian double burden of child malnutrition in a case study in Bogor, West Java, Indonesia, (2) to investigate the association of this phenomenon with child brain development, and (3) to examine the contribution of socioeconomic status and milk consumption to this phenomenon, so as to formulate some possible solutions to the problem. Design: A cross-sectional study using a structured coded questionnaire was conducted among 387 children aged 5-6 years and their parents from 8 areas in Bogor, West Java, Indonesia from November 2012 to December 2013, to record socioeconomic status, anthropometric measurements, and history of breastfeeding. Diet and probability of milk intake were assessed by two 24-h dietary recalls and a food frequency questionnaire (FFQ). Usual daily milk intake was calculated using the Multiple Source Method (MSM). Some brain development indicators (IQ, EQ, learning, and memory ability) were also measured using the Projective Multi-phase Orientation method, in order to study the correlation between the double burden of child malnutrition and brain development. Results and conclusions: A small picture of the double burden of child malnutrition is shown in Bogor, West Java, Indonesia, where the prevalence of Severe Acute Malnutrition (SAM) is 27.1%, Moderate Acute Malnutrition (MAM) is 24.9%, and overnutrition is 7.7%. This phenomenon is shown to impair child brain development. Malnourished children, both under- and over-nourished, have significantly (P < 0.05) lower memory ability compared to normal children (memory score, N: SAM = 45.2, 60; MAM = 48.5, 61; overweight = 48.4, 43; obesity = 47.9, 60; normal = 52.4, 163). A plausible explanation is that the lack of nutrient intake during the growth spurt period in undernourished children, or the increasing adiposity in overnourished children, may affect the growth of the hippocampus, which is responsible for memory ability. For both undernutrition and overnutrition, preventive action is preferable in order to avoid ongoing loss of cognitive performance in the next generation. Some possible solutions are promoting breastfeeding initiation and exclusive breastfeeding for infants, supporting the consumption of a normal portion of milk (250 to 500 ml per day) for children, and breaking the chain of poverty through socioeconomic improvement. National food security is the fundamental point for the betterment of the next generation. In the global context, the causes of under- and over-nutrition have to be addressed through integrated and systemic approaches for a better quality of the next generation of human beings.
Abstract:
With the present research, we investigated the effects of existential threat on veracity judgments. According to several meta-analyses, people tend to judge potentially deceptive messages of other people as true rather than as false (the so-called truth bias). This judgmental bias has been shown to depend on how people weigh the error of judging a true message as a lie (error 1) and the error of judging a lie as a true message (error 2). The weight of these errors has further been shown to be affected by situational variables. Given that research on terror management theory has found evidence that mortality salience (MS) increases sensitivity toward compliance with cultural norms, especially when they are of focal attention, we assumed that when the honesty norm is activated, MS affects judgmental error weighing and, consequently, judgmental biases. Specifically, activating the norm of honesty should decrease the weight of error 1 (judging a true message as a lie) and increase the weight of error 2 (judging a lie as a true message) when mortality is salient. In a first study, we found initial evidence for this assumption. Furthermore, the change in error weighing should reduce the truth bias, automatically resulting in better detection accuracy for actual lies and worse accuracy for actual true statements. In two further studies, we manipulated MS and honesty-norm activation before participants judged several videos containing actual truths or lies. Results revealed evidence for our prediction. Moreover, in Study 3, the truth bias was increased after MS when group solidarity was previously emphasized.
Abstract:
Land tenure insecurity is widely perceived as a disincentive for long-term land improvement investment; hence the objective of this paper is to evaluate how the tenure (in)security associated with different land use arrangements in Ghana influenced households' plot-level investment decisions and choices. The paper uses data from the Farmer-Based Organisations (FBO) survey, which collected information from 2,928 households across three ecological zones of Ghana using multi-stage cluster sampling. Probit and Tobit models tested the effects of land tenancy and ownership arrangements on households' investment behaviour while controlling for other factors. It was found that marginal farm size was inversely related to tenure insecurity, while tenure insecurity correlates positively with the value of farmland rather than farm size. Individual ownership and documentation of land significantly reduced the probability of households losing uncultivated lands. Individual land ownership increased both the probability of investing and the level of investments made in land improvement and irrigation, probably due to the increasing importance households place on land ownership. Two possible explanations for this finding are: first, that land markets and land relations have changed significantly over the last two decades, with increasing money transactions and fixed agreements propelled by population growth and the increasing value of land; and secondly, that the inclusion of irrigation investment as a long-term investment in land raises the value of household investment and the time period required to reap the returns on the investments. Households take land ownership and duration of tenancy into consideration if the resource implications of land investments are relatively large and the time dimension for harvesting returns to investments is relatively long.
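For readers unfamiliar with the probit specification used here, the sketch below fits a probit model of a binary investment decision on synthetic data with statsmodels; the variable names and simulated effects are placeholders, not the FBO survey variables or estimates.

```python
# Probit sketch on synthetic data (placeholders, not the FBO survey variables).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2928                                        # same order as the survey sample
owns_land = rng.integers(0, 2, n)               # 1 = individual ownership (assumed)
farm_size = rng.gamma(2.0, 2.0, n)              # hypothetical farm size (acres)

# Simulate a latent propensity to invest that rises with ownership
latent = -0.5 + 0.8 * owns_land + 0.05 * farm_size + rng.standard_normal(n)
invests = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([owns_land, farm_size]))
probit_fit = sm.Probit(invests, X).fit(disp=False)
print(probit_fit.summary(xname=["const", "owns_land", "farm_size"]))
```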
Abstract:
We present a new method to select features for a face detection system using Support Vector Machines (SVMs). In the first step, we reduce the dimensionality of the input space by projecting the data onto a subset of eigenvectors. The dimension of the subset is determined by a classification criterion based on minimizing a bound on the expected error probability of an SVM. In the second step, we select features from the SVM feature space by removing those that have low contributions to the decision function of the SVM.
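A rough sketch of the two-step idea on toy data with scikit-learn: project onto leading eigenvectors (PCA), train an SVM, then drop the components that contribute least to the decision function. For simplicity the sketch uses a linear SVM and the magnitude of the weight vector as the contribution measure; the original work instead chooses the eigenvector subset by minimizing a bound on the expected error probability, which is not reproduced here.

```python
# Two-step feature selection sketch (toy data; a simplification of the method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=100, n_informative=15,
                           random_state=0)

# Step 1: reduce dimensionality by projecting onto leading eigenvectors.
pca = PCA(n_components=30).fit(X)
Z = pca.transform(X)

# Step 2: train an SVM and remove features with low contribution to the
# decision function (here: small |w_i| of a linear SVM, as a simple proxy).
svm = SVC(kernel="linear").fit(Z, y)
w = np.abs(svm.coef_).ravel()
keep = np.argsort(w)[-10:]                      # keep the 10 strongest components

svm_small = SVC(kernel="linear").fit(Z[:, keep], y)
print("accuracy with 30 components:", svm.score(Z, y))
print("accuracy with 10 selected  :", svm_small.score(Z[:, keep], y))
```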
Abstract:
Compositional data analysis motivated the introduction of a complete Euclidean structure in the simplex of D parts. This was based on the early work of J. Aitchison (1986) and completed recently when the Aitchison distance in the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine that the partition can be refined, so that the probability density would represent a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities, obtained by generalizing the Aitchison geometry for compositions in the simplex to the set of probability densities.
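As a concrete anchor for the simplex geometry being generalized, the sketch below computes the centred log-ratio (clr) transform, the Aitchison inner product and the Aitchison distance for two compositions; this is the finite-D machinery that the paper extends to densities. The example compositions are arbitrary.

```python
# Aitchison geometry in the D-part simplex: clr transform, inner product, distance.
import numpy as np

def clr(x):
    """Centred log-ratio transform of a composition (parts must be positive)."""
    x = np.asarray(x, dtype=float)
    x = x / x.sum()                         # closure to the simplex
    g = np.exp(np.mean(np.log(x)))          # geometric mean of the parts
    return np.log(x / g)

def aitchison_inner(x, y):
    """Aitchison inner product = ordinary dot product of the clr coefficients."""
    return float(np.dot(clr(x), clr(y)))

def aitchison_distance(x, y):
    return float(np.linalg.norm(clr(x) - clr(y)))

x = [0.1, 0.3, 0.6]
y = [0.2, 0.2, 0.6]
print("<x, y>_a  =", aitchison_inner(x, y))
print("d_a(x, y) =", aitchison_distance(x, y))
```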
Abstract:
In this paper, a novel methodology is introduced that aims at minimizing the probability of network failure and the failure impact (in terms of QoS degradation) while optimizing resource consumption. A detailed study of MPLS recovery techniques and their GMPLS extensions is also presented. In this scenario, some features for reducing the failure impact while offering minimum failure probabilities are also analyzed. Novel two-step routing algorithms using this methodology are proposed. Results show that these methods offer high protection levels with optimal resource consumption.
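One standard way to route for minimum path failure probability, included here only as background to the methodology (not the paper's specific two-step algorithms): assuming independent link failures with probability p_e, a path fails with probability 1 − Π(1 − p_e), so the minimum-failure-probability path is a shortest path under the weights −log(1 − p_e). The topology and probabilities below are hypothetical.

```python
# Minimum failure probability routing sketch: shortest path on -log(1 - p) weights.
# Topology and link failure probabilities are hypothetical.
import math
import networkx as nx

links = {("A", "B"): 0.01, ("B", "D"): 0.02, ("A", "C"): 0.03,
         ("C", "D"): 0.001, ("B", "C"): 0.005}

G = nx.Graph()
for (u, v), p_fail in links.items():
    G.add_edge(u, v, weight=-math.log(1.0 - p_fail), p_fail=p_fail)

path = nx.shortest_path(G, "A", "D", weight="weight")
p_path_ok = math.prod(1.0 - G[u][v]["p_fail"] for u, v in zip(path, path[1:]))
print("path:", path, " failure probability:", 1.0 - p_path_ok)
```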
Abstract:
In this paper, we consider ATM networks in which the virtual path (VP) concept is implemented. The question of how to multiplex two or more diverse traffic classes while providing different quality of service (QOS) requirements is a very complicated open problem. Two distinct options are available: integration and segregation. In the integration approach, all the traffic from different connections is multiplexed onto one VP. This implies that the most restrictive QOS requirements must be applied to all services; link utilization therefore decreases, because unnecessarily stringent QOS is provided to all connections. With the segregation approach, the problem can be much simplified if the different types of traffic are separated by assigning each a VP with dedicated resources (buffers and links); resources may then not be efficiently utilized, however, because no sharing of bandwidth can take place across VPs. The probability that the bandwidth required by the accepted connections exceeds the capacity of the link is evaluated as the probability of congestion (PC). Since the PC can be expressed as the cell loss probability (CLP), we simply carry out bandwidth allocation using the PC. We first focus on the influence of some parameters (CLP, bit rate and burstiness) on the capacity required by a VP supporting a single traffic class, using the new convolution approach. Numerical results are presented both to compare the required capacity and to observe under which conditions each approach is preferred.
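As a minimal illustration of the convolution approach mentioned above: for independent on/off sources, the distribution of the aggregate bit rate on a VP is the convolution of the per-source distributions, and the probability of congestion is the tail mass above the allocated capacity. The source parameters and capacity below are hypothetical, not the paper's traffic classes.

```python
# Convolution approach sketch: probability of congestion (PC) on a virtual path
# carrying independent on/off sources (hypothetical traffic parameters).
import numpy as np

# Each source: (peak bit rate in Mbit/s, probability of being active = mean/peak)
sources = [(2, 0.3)] * 10 + [(10, 0.1)] * 3

# Aggregate bit-rate distribution over a 1 Mbit/s grid, built by convolution.
agg = np.array([1.0])                       # starts as "0 Mbit/s with probability 1"
for peak, p_on in sources:
    src = np.zeros(peak + 1)
    src[0], src[peak] = 1.0 - p_on, p_on    # off at 0, on at peak rate
    agg = np.convolve(agg, src)

capacity = 20                               # allocated VP capacity in Mbit/s
pc = agg[capacity + 1:].sum()               # P(aggregate bit rate > capacity)
print(f"probability of congestion = {pc:.4f}")
```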