891 results for Voice over Internet Protocol
Abstract:
This paper presents a study, based on bibliographic research, of LTE, the technology chosen for the fourth generation of mobile phones, and of the current status of 4G network deployment in Brazil. The change in user behavior, with data now used more than voice services, requires transmission networks that are increasingly robust and fast, enabling the viewing of videos and the use of other platforms that require an internet connection. A retrospective of the development of mobile technologies, from 1G up to the 4G currently in use, shows the long road to today's devices and to how the phone is used nowadays. Finally, the popularity of smartphones, and hence the growing number of people with access to 4G networks, has demanded new research into the technologies of future generations in order to meet the demand for speed, enabling significant changes in user experience.
Abstract:
Computer and telecommunication networks are changing the world dramatically and will continue to do so in the foreseeable future. The Internet, primarily based on packet switches, provides very flexible data services such as e-mail and access to the World Wide Web. The Internet is a variable-delay, variable-bandwidth network that, in its initial phase, provided no guarantees on quality of service (QoS). New services are being added to the pure data delivery framework of yesterday. Such high demands on capacity could lead to a “bandwidth crunch” at the core wide-area network, resulting in degradation of service quality. Fortunately, technological innovations have emerged which can provide relief to the end user and overcome the Internet’s well-known delay and bandwidth limitations. At the physical layer, a major overhaul of existing networks has been envisaged, from electronic media (e.g., twisted pair and cable) to optical fibers in wide-area, metropolitan-area, and even local-area settings. In order to exploit the immense bandwidth potential of optical fiber, interesting multiplexing techniques have been developed over the years.
Abstract:
In this paper, we propose a Layered Clustering Hierarchy (LCH) communication protocol for Wireless Sensor Networks (WSNs). The design of LCH has two goals: scalability and energy efficiency. In LCH, the sensor nodes are organized into a layered clustering structure, and each layer runs a distributed clustering protocol. By randomizing the rotation of cluster heads in each layer, the energy load is distributed evenly across the sensors in the network. Our simulations show that LCH is effective in densely deployed sensor networks: on average, 70% of live sensor nodes are involved directly in the clustering communication hierarchy, and the energy load and dead nodes are distributed evenly over the network. As our studies show that the performance of LCH depends mainly on the distributed clustering protocol, the location of cluster heads and the cluster size are two critical factors in the design of LCH.
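The abstract does not give LCH's election rule, so as an illustration, here is a minimal sketch of randomized cluster-head rotation in the style of LEACH, a common baseline for such protocols; the parameter p and the epoch bookkeeping are assumptions, not details from the paper:

```python
import random

def elect_cluster_heads(node_ids, p=0.05, round_num=0, past_heads=None):
    """LEACH-style randomized rotation (illustrative, not LCH itself):
    nodes that have not yet served in the current epoch self-elect with
    a threshold probability, which evens out energy load over rounds."""
    past_heads = past_heads or set()
    epoch = int(1 / p)                             # rounds per epoch
    threshold = p / (1 - p * (round_num % epoch))  # rises to 1.0 late in the epoch
    return [n for n in node_ids
            if n not in past_heads and random.random() < threshold]

# Round 3 of an epoch over 100 nodes; on average ~5 heads are elected.
heads = elect_cluster_heads(range(100), p=0.05, round_num=3)
```

In a layered hierarchy like LCH, each layer would rerun an election of this kind independently, which is what spreads the relay burden across the network.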
Abstract:
The security of the two-party Diffie-Hellman key exchange protocol is currently based on the discrete logarithm problem (DLP). However, it can also be built upon the elliptic curve discrete logarithm problem (ECDLP). Most proposed secure group communication schemes employ the DLP-based Diffie-Hellman protocol. This paper proposes ECDLP-based Diffie-Hellman protocols for secure group communication and evaluates their performance on wireless ad hoc networks. The proposed schemes are compared, at the same security level, with DLP-based group protocols under different channel conditions. Our experiments and analysis show that the Tree-based Group Elliptic Curve Diffie-Hellman (TGECDH) protocol has the best overall performance for secure group communication among the four schemes discussed in the paper. Low communication overhead, relatively low computation load, and short packets are the main reasons for the good performance of the TGECDH protocol.
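For reference, the two-party elliptic-curve exchange underlying these group schemes can be sketched with Python's `cryptography` package; the group extension (pairing such exchanges up a key tree, as in TGECDH) is omitted, and the curve and KDF parameters below are illustrative choices:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral key pair on the same curve.
alice_priv = ec.generate_private_key(ec.SECP256R1())
bob_priv = ec.generate_private_key(ec.SECP256R1())

# They exchange public keys, then derive the same shared secret:
# each multiplies its private scalar by the peer's public point.
alice_shared = alice_priv.exchange(ec.ECDH(), bob_priv.public_key())
bob_shared = bob_priv.exchange(ec.ECDH(), alice_priv.public_key())
assert alice_shared == bob_shared

# Run the raw shared point through a KDF before use as a session key.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"group-session").derive(alice_shared)
```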
Abstract:
We report new archeointensity data obtained from the analyses of baked clay elements (architectural and kiln brick fragments) sampled in Southeast Brazil and historically and/or archeologically dated between the end of the XVIth century and the beginning of the XXth century AD. The results were determined using the classical Thellier and Thellier protocol as modified by Coe, including partial thermoremanent magnetization (pTRM) and pTRM-tail checks, and the Triaxe protocol, which involves continuous high-temperature magnetization measurements. In both protocols, TRM anisotropy and the cooling-rate dependence of TRM were taken into account for the intensity determinations, which were successfully performed for 150 specimens from 43 fragments, with good agreement between the intensity results obtained from the two procedures. Nine site-mean intensity values were derived from three to eight fragments each and defined with standard deviations of less than 8%. The site-mean values vary from ~25 μT to ~42 μT and describe, in Southeast Brazil, a continuous decreasing trend of ~5 μT per century between ~1600 AD and ~1900 AD. Their comparison with recent archeointensity results obtained from Northeast Brazil and reduced to the same latitude shows that: (1) the geocentric axial dipole approximation is not valid between these southeastern and northeastern regions of Brazil, whose latitudes differ by ~10°; and (2) the available global geomagnetic field models (the gufm1 models, their recalibrated versions, and the CALS3K models) are not sufficiently precise to reliably reproduce the non-dipole field effects that prevailed in Brazil for at least the 1600-1750 period. The large non-dipole contribution thus highlighted is most probably linked to the evolution of the South Atlantic Magnetic Anomaly (SAMA) during that period. Furthermore, although our dataset is limited, the Brazilian archeointensity data appear to support the view of a rather oscillatory behavior of the axial dipole moment during the past three centuries, marked in particular by a moderate increase between the end of the XVIIIth century and the middle of the XIXth century, followed by the well-known decrease from 1840 AD attested by direct measurements.
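Both protocols ultimately recover the ancient field from the proportionality between natural remanence lost and laboratory remanence gained. A schematic sketch of that final step (an Arai-plot slope fit with made-up data; the real workflow adds the pTRM checks and the anisotropy and cooling-rate corrections described above):

```python
import numpy as np

def thellier_intensity(nrm_lost, ptrm_gained, b_lab=40.0):
    """Estimate the ancient field from an Arai diagram: fit the slope of
    NRM lost vs. pTRM gained over the chosen temperature interval, then
    B_ancient = |slope| * B_lab.  (Schematic only.)"""
    slope, _intercept = np.polyfit(ptrm_gained, nrm_lost, 1)
    return abs(slope) * b_lab

# Idealized specimen cooled in a ~30 uT field, remeasured with a
# 40 uT laboratory field, so the Arai slope is ~ -0.75.
ptrm = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
nrm = 0.75 * (1.0 - ptrm)
print(thellier_intensity(nrm, ptrm))  # ~30.0 uT
```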
Abstract:
This study presents the first archeointensity results from Northeast Brazil, obtained from 14 groups of architectural brick fragments sampled in the city of Salvador, Bahia State (13°S, 38.5°W) and dated between the middle of the XVIth century and the beginning of the XIXth century. The dating is ascertained by historical documents complemented by archeological constraints, yielding in all cases age uncertainties of less than 50 years. Analyses were carried out using two experimental protocols: (1) the "zero field-in field" version of the classical Thellier and Thellier method as proposed by Coe (TT-ZI), including partial thermoremanent magnetization (pTRM) and pTRM-tail checks, and (2) the Triaxe procedure, involving continuous high-temperature magnetization measurements. Both TRM anisotropy and cooling-rate effects were taken into account for the intensity determinations. The cooling-rate effect was further explored for the TT-ZI protocol using three increasingly slow cooling times (5 h, 10 h, and 25 h) between 450 °C and room temperature. Following archeological constraints, the slowest cooling time was retained in our study, yielding decreases of the raw intensity values by 4% to 14%. For each fragment, a mean intensity was computed and retained only when the data obtained from all specimens (between 2 and 6) satisfied a coherence test at ~5%. A total of 57 fragments (183 specimens) was considered for the computation of site-mean intensity values, with derived standard deviations of less than 8% of the corresponding means. When computed separately using the two experimental techniques, the site-mean intensity values always agree to within 5%. A good consistency is observed between intensity values of similar or close ages, which strengthens their reliability. Our data principally show a significant and continuous decrease in geomagnetic field intensity in Northeast Brazil between the first half of the XVIIth century and the XXth century. One result dated to the second half of the XVIth century further suggests that the geomagnetic field intensity reached a maximum around 1600 AD. This evolution is in good agreement with that expected in the city of Salvador from the available global geomagnetic field models. However, the accuracy of these models appears less well constrained between ~1550 AD and ~1650 AD.
Abstract:
This paper discusses some aspects of Wireless Sensor Networks over the IEEE 802.15.4 standard and proposes, for the first time, a mesh network topology with geographic routing integrated into the open Freescale protocol (SMAC - Simple Medium Access Control). To this end, a SMAC routing protocol is proposed. Before this work, the SMAC protocol was suitable for one-hop communications only; with the developed mechanisms, multi-hop communication becomes possible. Performance results from the implemented protocol are presented and analyzed against important requirements for wireless sensor networks, such as robustness, the self-healing property, and low latency.
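The abstract does not detail the routing rule itself; below is a minimal sketch of greedy geographic forwarding, the usual basis for geographic mesh routing (the node and coordinate structures are hypothetical illustrations, not Freescale's SMAC API):

```python
import math

def greedy_next_hop(current, destination, neighbors, positions):
    """Greedy geographic forwarding: hand the packet to the neighbor
    geographically closest to the destination, provided that neighbor
    is closer than the current node (otherwise report a local minimum
    that a recovery mechanism would have to handle)."""
    def dist(a, b):
        (ax, ay), (bx, by) = positions[a], positions[b]
        return math.hypot(ax - bx, ay - by)
    best = min(neighbors, key=lambda n: dist(n, destination), default=None)
    if best is not None and dist(best, destination) < dist(current, destination):
        return best
    return None  # void: no neighbor makes progress toward the destination

# Hypothetical 4-node line topology: 0 -> 1 -> 2 -> 3.
positions = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (3, 0)}
print(greedy_next_hop(0, 3, neighbors=[1], positions=positions))  # 1
```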
Abstract:
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. Also, they introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
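As a toy illustration of the differential-encoding idea (not any particular toolkit's wire format): the sender transmits only a delta against a reference message both sides share, and the receiver replays that delta to reconstruct the full message.

```python
import difflib

reference = "<soap:Envelope><soap:Body><getQuote symbol='IBM'/></soap:Body></soap:Envelope>"
message   = "<soap:Envelope><soap:Body><getQuote symbol='ACME'/></soap:Body></soap:Envelope>"

# Sender: keep only the opcodes that turn the reference into the message.
matcher = difflib.SequenceMatcher(None, reference, message)
delta = [(op, i1, i2, message[j1:j2])
         for op, i1, i2, j1, j2 in matcher.get_opcodes() if op != "equal"]

# Receiver: holds the same reference and replays the delta.
def apply_delta(ref, delta):
    out, pos = [], 0
    for op, i1, i2, repl in delta:
        out.append(ref[pos:i1])   # copy the unchanged run
        out.append(repl)          # splice in the new text (empty for deletes)
        pos = i2                  # skip the replaced/deleted run
    out.append(ref[pos:])
    return "".join(out)

assert apply_delta(reference, delta) == message
```

For highly similar messages the delta is far smaller than the message itself, which is exactly the overhead saving these approaches exploit.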
Abstract:
Background/purpose: Gallstones and cholelithiasis are being increasingly diagnosed in children owing to the widespread use of ultrasonography. The treatment of choice is cholecystectomy, and routine intraoperative cholangiography is recommended to explore the common bile duct. The objectives of this study were to describe our experience with the management of gallstone disease in childhood over the last 18 years and to propose an algorithm to guide the approach to cholelithiasis in children based on clinical and ultrasonographic findings. Methods: The data for this study were obtained by reviewing the records of all patients with gallstone disease treated between January 1994 and October 2011. The patients were divided into the following 5 groups based on their symptoms: group 1, asymptomatic; group 2, nonbiliary obstructive symptoms; group 3, acute cholecystitis symptoms; group 4, a history of biliary obstructive symptoms that had completely resolved by the time of surgery; and group 5, ongoing biliary obstructive symptoms. Patients were treated according to an algorithm based on their clinical, ultrasonographic, and endoscopic retrograde cholangiopancreatography (ERCP) findings. Results: A total of 223 patients were diagnosed with cholelithiasis, and comorbidities were present in 177 patients (79.3%). The most common comorbidities were hemolytic disorders in 139 patients (62.3%) and previous bariatric surgery in 16 (7.1%). Although symptoms were present in only 134 patients (60.0%), cholecystectomy was performed for all patients with cholelithiasis, even if they were asymptomatic; the surgery was laparoscopic in 204 patients and open in 19. Fifty-six patients (25.1%) presented with complications as the first sign of cholelithiasis (e.g., pancreatitis, choledocholithiasis, or acute calculous cholecystitis). Intraoperative cholangiography was indicated in 15 children, and it was positive in only 1 (0.4%), for whom ERCP was necessary to extract the stone after a laparoscopic cholecystectomy (LC). Preoperative ERCP was performed in 11 patients to extract the stones, and a hepaticojejunostomy was indicated in 2 patients. There were no injuries to the hepatic artery or common bile duct in our series. Conclusions: Based on our experience, we can propose an algorithm to guide the approach to cholelithiasis in the pediatric population. The final conclusion is that LC results in limited postoperative complications in children with gallstones. When a diagnosis of choledocholithiasis or dilation of the choledochus is made, ERCP is necessary if obstructive symptoms persist either before or after an LC. Intraoperative cholangiography and laparoscopic common bile duct exploration are not mandatory.
Abstract:
There is a wide range of video services over complex transmission networks, and in some cases end users fail to receive an acceptable quality level. In this paper, the different factors that degrade users' quality of experience (QoE) in video streaming services that use TCP as the transmission protocol are studied. In this specific service, the impairment factors are the number of pauses, their duration, and their temporal location. In order to measure the effect that each temporal segment has on the overall video quality, subjective tests were performed. Because current subjective test methodologies are not adequate for assessing video streaming over TCP, some recommendations are provided here. At the application layer, a customized player is used to evaluate the behavior of the player buffer and, consequently, the end-user QoE. The video subjective test results demonstrate a close correlation between application parameters and subjective scores. Based on this fact, a new metric named VsQM is defined, which considers the importance of the temporal location of pauses in assessing the user QoE of a video streaming service. A useful application scenario is also presented, in which the proposed metric is used to improve video services.
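The paper's actual VsQM formula is not reproduced in the abstract; the sketch below only illustrates the general idea of weighting pauses by duration and temporal location, with weights and scaling that are entirely hypothetical:

```python
def pause_weighted_qoe(pauses, video_len, max_score=5.0):
    """Toy pause-weighted quality score: each pause costs a penalty that
    grows with its duration and with a weight tied to its temporal
    position (later pauses hurt more here -- an assumption for
    illustration, not VsQM's published weighting)."""
    penalty = 0.0
    for start, duration in pauses:
        segment_weight = 1.0 + start / video_len  # 1.0 early .. 2.0 late
        penalty += segment_weight * duration
    # Squash into [1, max_score]; the scaling constant is arbitrary.
    return max(1.0, max_score - 0.1 * penalty)

# Two 3 s pauses in a 600 s video: one near the start, one near the end.
print(pause_weighted_qoe([(30, 3.0), (570, 3.0)], video_len=600))  # ~4.1
```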
Abstract:
Within-site variability in species detectability is a problem common to many biodiversity assessments and can strongly bias the results. Such variability can be caused by many factors, including simple counting inaccuracies, which can be solved by increasing sample size, or temporal changes in species behavior, meaning that the way the temporal sampling protocol is designed is also very important. Here we use the example of mist-netted tropical birds to determine how design decisions in the temporal sampling protocol can alter the data collected and how these changes might affect the detection of ecological patterns, such as the species-area relationship (SAR). Using data from almost 3400 birds captured during 21,000 net-hours at 31 sites in the Brazilian Atlantic Forest, we found that the magnitude of ecological trends remained fairly stable, but the probability of detecting statistically significant ecological patterns varied depending on sampling effort, the time of day, and the season in which sampling was conducted. For example, more species were detected in the wet season, but the SAR was strongest in the dry season. We found that the temporal distribution of sampling effort was more important than its total amount, discovering that similar ecological results could have been obtained with one-third of the total effort, as long as each site had been sampled equally over 2 yr. We argue that projects with the same sampling effort and spatial design but different temporal sampling protocols are likely to report different ecological patterns, which may ultimately lead to inappropriate conservation strategies.
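For context, the species-area relationship the authors test is conventionally fit as the power law S = cA^z, which is a straight line in log-log space; a minimal sketch with made-up numbers:

```python
import numpy as np

# Species-area relationship: S = c * A**z  <=>  log S = log c + z * log A.
area = np.array([1.0, 5.0, 10.0, 50.0, 100.0])  # site areas (ha), made up
species = np.array([12, 22, 27, 45, 55])        # species counts, made up

z, log_c = np.polyfit(np.log(area), np.log(species), 1)
print(f"z = {z:.2f}, c = {np.exp(log_c):.1f}")

# The slope z summarizes how fast richness accumulates with area;
# comparing z across seasons or effort levels mirrors the comparison
# of SAR strength discussed in the abstract.
```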
Abstract:
Purpose. To use a randomized design to evaluate the effectiveness of voice training programs for telemarketers via multidimensional analysis. Methods. Forty-eight telemarketers were randomly assigned to two groups: a voice training group (n = 14), who underwent training over an 8-week period, and a nontraining control group (n = 34). Before and after training, recordings of the sustained vowel /epsilon/ and connected speech were collected for acoustic and perceptual analyses. Results. Based on pre- and posttraining comparisons, the voice training group presented a significant reduction in percent jitter (P = 0.044). No other significant differences were observed, and inter-rater reliability varied from poor to fair. Conclusions. These findings suggest that voice training improved a single acoustic dimension but did not change the perceptual dimensions of the telemarketers' voices.
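Percent jitter, the one measure that improved, is conventionally computed as the mean absolute difference between consecutive pitch periods relative to the mean period; a minimal sketch with made-up cycle lengths:

```python
import numpy as np

def percent_jitter(periods_ms):
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, divided by the mean period, times 100."""
    p = np.asarray(periods_ms, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p))) / np.mean(p)

# Made-up cycle lengths (ms) for a sustained vowel around 200 Hz.
print(percent_jitter([5.01, 4.98, 5.03, 4.97, 5.02]))  # ~0.95%
```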
Abstract:
Objective: Parameters to distinguish normal from deviant voices in early childhood have not been established. The current study sought to characterize the voices of children auditorily and acoustically, and to study the relationship between vocal behavior reported by teachers and the presence of vocal aberrations. Methods: One hundred children between 4 years and 6 years and 11 months of age, who attended early childhood educational institutions, were included. The sample comprised 50 children with normal voices (NVG) and 50 with deviant voices (DVG), matched by gender and age. All participants underwent auditory and acoustic analysis of vocal quality and had their vocal behaviors assessed by teachers through a specific protocol. Results: DVG had a higher incidence of breathiness (p < 0.001) and roughness (p < 0.001), but not of vocal strain (p = 0.546), which was similar in both groups. The average F0 was lower in the DVG, and a higher noise component was observed in this group as well. Regarding the protocol used, "Aspects Related to Phonotrauma - Children's Protocol", higher means were observed for children from the DVG in all analyzed aspects and also in the overall means (DVG = 2.15; NVG = 1.12, p < 0.001). In the NVG, a higher incidence of vocal behavior without alterations or with discrete alterations was observed, whereas a higher incidence of moderate, severe, or extreme alterations of vocal behavior was observed in the DVG. Conclusions: Perceptual assessment of voice, vocal acoustic parameters (F0, noise, and GNE), and aspects related to vocal trauma and vocal behavior differentiated the groups of children with normal and deviant voices.