856 results for Load Balancing in Wireless LAN
Abstract:
When an asphalt mixture is subjected to a destructive compressive load, it experiences a sequence of three deformation stages: (1) the primary, (2) the secondary, and (3) the tertiary stage. Most research in the literature focuses on plastic deformation in the primary and secondary stages, such as prediction of the flow number, which is in fact the initiation of the tertiary stage. However, little research effort has been reported on mechanistic modeling of the damage that occurs in the tertiary stage. The main objective of this paper is to provide a mechanistic characterization method for the damage modeling of asphalt mixtures in the tertiary stage. A preliminary study conducted by the writers illustrates that deformation during the tertiary flow of asphalt mixtures is principally caused by the formation and propagation of cracks, which is signaled by the increase of the phase angle in the tertiary stage. The strain caused by the growth of cracks is the viscofracture strain, which can be obtained by decomposing the total strain measured in the destructive compressive test. The viscofracture strain is employed in the research reported in this paper to mechanistically characterize the time-dependent fracture (viscofracture) of asphalt mixtures in compression. By using the dissipated pseudostrain energy balance principle, the damage density and true stress are determined, and both are demonstrated to increase with load cycles in the tertiary stage. The increased true stress yields extra viscoplastic strain, which is why permanent deformation is accelerated by the occurrence of cracks. To characterize the evolution of viscofracture in asphalt mixtures in compression, a pseudo J-integral Paris' law in terms of damage density is proposed and its material constants are determined, which can be employed to predict the fracture of asphalt mixtures in compression. © 2013 American Society of Civil Engineers.
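A crack-growth law of this type is commonly written with the damage density as the growing quantity and the pseudo J-integral as the driving force; as a sketch of the form involved (the exact notation in the paper may differ):

    \frac{\mathrm{d}\phi}{\mathrm{d}N} = A \, (J_R)^{n}

where \phi is the damage density, N is the number of load cycles, J_R is the pseudo J-integral, and A and n are the material constants referred to above.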
Abstract:
One of the major drawbacks for mobile nodes in wireless networks is power management. Our goal is to evaluate the performance of a power control scheme used to reduce network congestion, improve quality of service, and avoid collisions in vehicular networks and road safety applications. Among the benefits of power control (PC) are improved spatial reuse and increased network capacity in mobile wireless communications. In this simulation we evaluated the performance of existing rate algorithms against the context-aware rate selection algorithm (ACARS), and examined how ACARS can be applied to road safety, network control, and power management. Results show that ACARS is able to minimize the total transmit power in the presence of propagation effects and vehicle mobility by adapting to fast-varying channel conditions, using the path loss exponent values appropriate for the environment, as given in the network simulation parameters. Our results show that ACARS is a very robust algorithm which performs well under the propagation effects to which every transmitted signal in mobile networks is prone. © 2013 IEEE.
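The role of the path loss exponent can be sketched with the standard log-distance model (an assumption; the abstract does not state the exact propagation model). The received power in dB is

    P_r(d) = P_t - PL(d_0) - 10\,n\,\log_{10}(d/d_0)

so the smallest transmit power that keeps P_r(d) above a receiver sensitivity P_{\min} is

    P_t = P_{\min} + PL(d_0) + 10\,n\,\log_{10}(d/d_0)

where n is the path loss exponent and d_0 a reference distance; a larger n (harsher environment) forces higher transmit power for the same range.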
Abstract:
Since privatisation, maintenance of DNO LV feeder maximum demand information has gradually lapsed in some Utility Areas, and it is postulated that this lack of knowledge about 11kV and LV electrical networks is resulting in a less economical and energy efficient network as a whole. In an attempt to quantify the negative impact, this paper examines ten postulated new connection scenarios against a set of real LV load readings, in order to find the difference in design solutions when LV load readings were and were not known. The load profiles of the substations were examined in order to explore their utilisation. In 70% of the scenarios explored, significant cost differences were found; these differences averaged 1000% between schemes designed with and without load readings. Over-designing a system, and therefore operating more underutilised transformers, is both less financially beneficial and less energy efficient. The paper concludes that new connection design is improved in terms of cost when carried out based on known LV load information, and this strengthens the case for regular collection of maximum feeder demand information and/or metering of LV feeders. © 2013 IEEE.
Abstract:
Communication through relay channels in wireless sensor networks can create diversity and consequently improve the robustness of data transmission for ubiquitous computing and networking applications. In this paper, we investigate the performance of relay channels in terms of diversity gain and throughput via both experimental research and theoretical analysis. Two relaying algorithms, dynamic relaying and fixed relaying, are utilised and tested to find out what relay channels can contribute to system performance. The tests are based on a wireless relay sensor network comprising a source node, a destination node, and a pair of relay nodes, and are carried out in an indoor environment with little movement of nearby objects. The tests confirm, in line with the analytical results, that more relay nodes lead to higher diversity gain in the network. The test results also show that the data throughput between the source node and the destination node is enhanced by the presence of the relay nodes. Energy consumption associated with the relaying strategy is also analysed. Copyright © 2009 John Wiley & Sons, Ltd.
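The analytical claim that more relays yield higher diversity gain can be sketched with the usual independence argument (a simplification; the paper's channel model may be richer): if the direct link and each of N relay paths fail independently with outage probability p, the destination is in outage only when every path fails, so

    P_{out} = p^{\,N+1}

and the diversity order therefore grows linearly with the number of relay nodes.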
Abstract:
The research presented in this dissertation comprises several parts which jointly attain the goal of semantic distributed database management with applications to Internet dissemination of environmental data. Part of the research into more effective and efficient data management has been pursued through enhancements to the Semantic Binary Object-Oriented database (Sem-ODB), such as more effective load balancing techniques for the database engine and the use of Sem-ODB as a tool for integrating structured and unstructured heterogeneous data sources. Another part of the research in data management has pursued methods for optimizing queries in distributed databases through intelligent use of network bandwidth; this has applications in networks that provide varying levels of quality of service or throughput. The application of the Semantic Binary database model as a tool for relational database modeling has also been pursued. This has resulted in database applications that are used by researchers at Everglades National Park to store environmental data and remotely sensed imagery. The areas of research described above have contributed to the creation of TerraFly, which provides for the dissemination of geospatial data via the Internet. The TerraFly research presented herein ranges from the development of TerraFly's back-end database and interfaces, through the features presented to the public (such as the ability to provide autopilot scripts and on-demand data about a point), to applications of TerraFly in the areas of hazard mitigation, recreation, and aviation.
Abstract:
The tragic events of September 11th ushered in a new era of unprecedented challenges. Our nation has to be protected from the alarming threats of adversaries, threats that exploit the nation's critical infrastructures and affect all sectors of the economy. There is a need for pervasive monitoring and decentralized control of the nation's critical infrastructures. The communication needs of monitoring and control of critical infrastructures were traditionally met by wired communication systems. These technologies ensure high reliability and bandwidth but are very expensive and inflexible, and they do not support mobility or pervasive monitoring. Their communication protocols are Ethernet-based and use contention access, which results in high rates of unsuccessful transmission and high delay. An emerging class of wireless networks, embedded wireless sensor and actuator networks, has potential benefits for real-time monitoring and control of critical infrastructures. The use of embedded wireless networks for monitoring and control of critical infrastructures requires secure, reliable, and timely exchange of information among controllers, distributed sensors, and actuators. This exchange takes place over shared wireless media, which are highly unpredictable due to path loss, shadow fading, and ambient noise, while monitoring and control applications have stringent requirements on reliability, delay, and security. The primary issue addressed in this dissertation is the impact of the wireless medium in harsh industrial environments on the reliable and timely delivery of critical data. In the first part of the dissertation, a combined networking and information-theoretic approach is adopted to determine the transmit power required to maintain a minimum wireless channel capacity for reliable data transmission. The second part describes a channel-aware scheduling scheme that ensures efficient utilization of the wireless link and guarantees delay. Analytical evaluations and simulations are used to validate the feasibility of the methodologies and demonstrate that the protocols achieve reliable and real-time data delivery in wireless industrial networks.
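A minimal version of the transmit-power step (assuming an AWGN link of bandwidth B, noise spectral density N_0, and channel gain g; the dissertation's channel model, which includes shadow fading, is more elaborate): requiring a minimum capacity C_{\min} in Shannon's formula

    C = B \log_2\!\left(1 + \frac{P\,g}{N_0 B}\right)

and solving for the transmit power gives

    P \ge \frac{N_0 B}{g}\left(2^{C_{\min}/B} - 1\right)

so the worst-case (smallest) channel gain along the link dictates the power needed for reliable delivery.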
Abstract:
The lateral load distribution factor is a key quantity for designing and analyzing curved steel I-girder bridges. In this dissertation, the effects of various parameters on moment and shear distribution for curved steel I-girder bridges were studied using the Finite Element Method (FEM). The parameters considered in the study were: radius of curvature, girder spacing, overhang, span length, number of girders, ratio of girder stiffness to overall bridge stiffness, slab thickness, girder longitudinal stiffness, cross frame spacing, and girder torsional inertia. The variations of these parameters were based on a statistical analysis of a real bridge database created by extracting data from existing or newly designed curved steel I-girder bridge plans collected from across the nation. A hypothetical bridge superstructure model built from the mean values of the data was created and used for the parameter study. The study showed that cross frame spacing and girder torsional inertia had negligible effects; the other parameters were identified as key parameters. Regression analysis was conducted on the FEM results, and simplified formulas for predicting positive moment, negative moment, and shear distribution factors were developed. Thirty-three real bridges were analyzed using FEM to verify the formulas. The ratio of the distribution factor obtained from the formula to the one obtained from the FEM analysis, referred to as the g-ratio, was examined. The results showed that the standard deviation of the g-ratios was within 0.04 to 0.06 and the mean value of the g-ratios exceeded unity by one standard deviation. This indicates that the formulas are conservative in most cases but not overly conservative. The final formulas are similar in format to the current American Association of State Highway and Transportation Officials (AASHTO) Load and Resistance Factor Design (LRFD) specifications. The developed formulas were compared with other simplified methods, and the comparison showed that the proposed formulas gave the most accurate results among all methods. The formulas developed in this study will assist bridge engineers and researchers in predicting the actual live load distribution in horizontally curved steel I-girder bridges.
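For context on the format being matched, the straight-girder AASHTO LRFD moment distribution factor for interior girders with two or more loaded lanes has the product-of-ratios form (the curved-girder formulas themselves are given only in the dissertation):

    g = 0.075 + \left(\frac{S}{9.5}\right)^{0.6} \left(\frac{S}{L}\right)^{0.2} \left(\frac{K_g}{12\,L\,t_s^3}\right)^{0.1}

where S is the girder spacing (ft), L the span length (ft), t_s the slab thickness (in.), and K_g the longitudinal stiffness parameter; the proposed formulas follow this pattern, with radius of curvature among the governing parameters.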
Abstract:
Unequal improvements in processor and I/O speeds have made many applications, such as databases and operating systems, increasingly I/O bound. Many schemes, such as disk caching and disk mirroring, have been proposed to address the problem. In this thesis we focus only on disk mirroring. In disk mirroring, a logical disk image is maintained on two physical disks, allowing a single disk failure to be transparent to application programs. Although disk mirroring improves data availability and reliability, it has two major drawbacks. First, writes are expensive because both disks must be updated. Second, load balancing during failure-mode operation is poor because all requests are serviced by the surviving disk. Distorted mirrors was proposed to address the write problem and interleaved declustering to address the load balancing problem. In this thesis we perform a comparative study of these two schemes under various operating modes. In addition, we also study traditional mirroring to provide a common basis for comparison.
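The behaviour of traditional mirroring described above can be sketched as follows (hypothetical types and names, not code from the thesis): reads are balanced across the pair, writes must update both disks, and after a failure every request lands on the survivor.

    #include <stdbool.h>

    /* Toy model of a mirrored pair: two physical disks behind one
     * logical image, with queue lengths standing in for per-disk load. */
    struct mirror {
        int  queue_len[2];  /* outstanding requests per physical disk */
        bool failed[2];     /* true once a disk has failed */
    };

    /* Reads are load-balanced onto the shorter queue; with one disk
     * down, every read hits the survivor (the failure-mode drawback). */
    int schedule_read(struct mirror *m) {
        if (m->failed[0]) return 1;
        if (m->failed[1]) return 0;
        return (m->queue_len[0] <= m->queue_len[1]) ? 0 : 1;
    }

    /* Writes are expensive: every surviving copy must be updated. */
    void schedule_write(struct mirror *m) {
        for (int d = 0; d < 2; d++)
            if (!m->failed[d])
                m->queue_len[d]++;
    }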
Abstract:
With the growing commercial importance of the Internet and the development of new real-time, connection-oriented services like IP telephony and electronic commerce, resilience is becoming a key issue in the design of IP-based networks. Two emerging technologies which can accomplish the task of efficient information transfer are Multiprotocol Label Switching (MPLS) and Differentiated Services. A main benefit of MPLS is the ability to introduce traffic-engineering concepts due to its connection-oriented characteristic; with MPLS it is possible to assign different paths to packets through the network. Differentiated Services divides traffic into different classes and treats them differently, especially when there is a shortage of network resources. In this thesis, a framework was proposed to integrate the above two technologies, and its performance in providing load balancing and improving QoS was evaluated. Simulation and analysis of this framework demonstrated that the combination of MPLS and Differentiated Services is a powerful tool for QoS provisioning in IP networks.
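One way to picture the integration (a hypothetical sketch, not the framework defined in the thesis): map each Differentiated Services class to a set of eligible MPLS label-switched paths, then balance load by steering each flow to the least-utilised eligible LSP.

    #include <stddef.h>

    enum ds_class { EF, AF, BE };    /* example DiffServ classes */

    struct lsp {
        double utilisation;          /* current load on this path, 0..1 */
        int    allowed[3];           /* allowed[c] != 0 if class c may use this LSP */
    };

    /* Pick the least-utilised LSP that admits the given class; returns
     * -1 when no eligible path exists (resource shortage for that class). */
    int select_lsp(const struct lsp *paths, size_t n, enum ds_class c) {
        int best = -1;
        for (size_t i = 0; i < n; i++) {
            if (!paths[i].allowed[c])
                continue;
            if (best < 0 || paths[i].utilisation < paths[best].utilisation)
                best = (int)i;
        }
        return best;
    }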
Abstract:
The reverse time migration (RTM) algorithm has been widely used in the seismic industry to generate images of the subsurface and thus reduce the risk of oil and gas exploration. Its widespread use is due to the high quality of its subsurface images, but RTM is also known for its high computational cost, and parallel computing techniques have therefore been used in its implementations. In general, parallel approaches to RTM use coarse granularity, distributing the processing of subsets of seismic shots among the nodes of a distributed system. Coarse-grained parallel approaches to RTM have been shown to be very efficient, since each seismic shot can be processed independently. For this reason, RTM performance can be further improved by using a parallel approach with finer granularity for the processing assigned to each node. This work presents an efficient parallel algorithm for 3D reverse time migration with fine granularity using OpenMP. The 3D acoustic wave propagation algorithm makes up much of the RTM; different load balancing strategies were analyzed in order to minimize parallel performance losses at this stage. The results served as a basis for the implementation of the other RTM phases: backpropagation and the imaging condition. The proposed algorithm was tested with synthetic data representing some of the possible subsurface structures. Metrics such as speedup and efficiency were used to analyze its parallel performance. The migrated sections show that the algorithm performed satisfactorily in identifying subsurface structures. As for parallel performance, the analysis clearly demonstrates the scalability of the algorithm, which achieved a speedup of 22.46 for wave propagation and 16.95 for the full RTM, both with 24 threads.
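A minimal sketch of the fine-grained OpenMP parallelisation of the propagation kernel (a second-order-in-space stencil and invented names are assumed; the thesis analyses several scheduling/load-balancing choices for exactly this kind of loop nest):

    #include <omp.h>

    /* One time step of the 3D acoustic wave equation, parallelised over
     * the outer grid dimensions. IDX flattens (i,j,k) into 1-D arrays. */
    #define IDX(i, j, k) (((i) * ny + (j)) * nz + (k))

    void step(float *restrict next, const float *restrict cur,
              const float *restrict prev, const float *restrict vel2,
              int nx, int ny, int nz, float dt2_h2)
    {
        /* schedule(static) splits iterations evenly; other schedules
         * (dynamic, guided) trade overhead against load balance. */
        #pragma omp parallel for collapse(2) schedule(static)
        for (int i = 1; i < nx - 1; i++)
            for (int j = 1; j < ny - 1; j++)
                for (int k = 1; k < nz - 1; k++) {
                    float lap = cur[IDX(i-1,j,k)] + cur[IDX(i+1,j,k)]
                              + cur[IDX(i,j-1,k)] + cur[IDX(i,j+1,k)]
                              + cur[IDX(i,j,k-1)] + cur[IDX(i,j,k+1)]
                              - 6.0f * cur[IDX(i,j,k)];
                    next[IDX(i,j,k)] = 2.0f * cur[IDX(i,j,k)]
                                     - prev[IDX(i,j,k)]
                                     + vel2[IDX(i,j,k)] * dt2_h2 * lap;
                }
    }

Here vel2 holds the squared velocity model and dt2_h2 the ratio dt^2/h^2; backpropagation reuses the same kernel with time-reversed receiver data.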
Abstract:
The textile sector is one of the main contributors to the generation of industrial wastewaters, due to its use of large volumes of water with a high organic load. These wastewaters contain dyes, surfactants, starch, alcohols, acetic acid, and other constituents from the various textile processing steps. Treatment of textile wastewater is therefore essential before it is released into water bodies, where it can cause disastrous physicochemical changes in the environment. Surfactants are substances widely used in separation processes, and their use for treating textile wastewaters was evaluated in this research by applying cloud point extraction and ionic flocculation. In the cloud point extraction, nonylphenol with a 9.5 ethoxylation degree was used as the surfactant to remove reactive dye. The process was evaluated in terms of temperature and of surfactant and dye concentrations; dye removal reached 91%. Ionic flocculation occurs due to the presence of calcium, which reacts with an anionic surfactant to form insoluble surfactants capable of attracting organic matter by adsorption. In this work, ionic flocculation using base soap was applied to the treatment of synthetic wastewater containing dyes belonging to three classes: direct, reactive, and disperse. The influence of the following parameters was evaluated: surfactant and electrolyte concentrations, stirring speed, equilibrium time, temperature, and pH. Flocculation of the surfactant was carried out in two ways: forming the floc in the effluent itself and forming the floc before mixing it into the effluent. When the floc was formed in the textile effluent itself, removal of reactive and direct dyes was 97% and 87%, respectively. When the floc was formed prior to adding it to the effluent, removal of direct and disperse dyes reached 92% and 87%, respectively. These results show the efficiency of the evaluated processes for dye removal from textile wastewaters.
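The removal figures quoted above follow the usual definition of removal efficiency (stated here as an assumption about how the percentages were computed):

    R = \frac{C_0 - C_f}{C_0} \times 100\%

where C_0 and C_f are the dye concentrations before and after treatment.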
Abstract:
This study proposes a solution for scheduling data processing jobs with variable demand in cloud environments. The system checks variables specific to the business context of a company incubated at the Digital Metropole Institute of UFRN, and from them generates a strategy for identifying the machine configurations available in a cloud environment, focusing on processing performance and employing data load balancing strategies and parallelism in the software execution flow. The goal is to meet seasonal demand within a standard time limit set by the company, while controlling operating costs by using cloud services at the IaaS layer.
Abstract:
This paper reports on an investigation with first-year undergraduate Product Design and Management students within a School of Engineering. At the time of this investigation, the students had studied fundamental engineering science and mathematics for one semester. They were given an open-ended, ill-formed problem which involved designing a simple bridge to cross a river. They were given a talk on problem solving and a rubric to follow, if they chose to do so, but were not given any formulae or procedures needed to resolve the problem. In theory, they possessed the knowledge to ask the right questions and make appropriate assumptions; in practice, it turned out they were unable to link their a priori knowledge to this problem. They were able to solve simple beam problems when given closed questions, but the results show they were unable to visualise a simple bridge as an augmented beam problem, ask pertinent questions, and formulate appropriate assumptions in order to offer resolutions.
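As an illustration of the link the students failed to make (a worked example, not part of the study): idealising the bridge deck as a simply supported beam of span L under a uniformly distributed load w gives a maximum bending moment at midspan of

    M_{\max} = \frac{w L^2}{8}

from which member sizing follows; recognising that idealisation, rather than applying the formula, was the missing step.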
Abstract:
This paper is on the use and performance of M-path polyphase Infinite Impulse Response (IIR) filters for channelisation, where Finite Impulse Response (FIR) filters are conventionally preferred. The paper specifically focuses on Discrete Fourier Transform (DFT) modulated filter banks, which are known to be an efficient choice for channelisation in communication systems. Here the low-pass prototype filter for the DFT filter bank has been implemented using an M-path polyphase IIR filter, and we show that the spikes present in the stopband can be avoided by making use of the guard bands between narrowband channels. It will be shown that channelisation performance is not affected when polyphase IIR filters are employed instead of their counterparts derived from FIR prototype filters. A detailed complexity and performance analysis of the proposed approach is given in this article.
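For reference, the structure in question is the standard polyphase decomposition of the prototype (textbook notation, not specific to this paper):

    H(z) = \sum_{m=0}^{M-1} z^{-m} E_m(z^M)

with channel k obtained by DFT modulation of the prototype, h_k[n] = h[n]\,e^{j 2\pi k n / M}, so the whole bank costs one set of polyphase branches plus one M-point DFT; in the polyphase IIR case each branch E_m(z) is typically a low-order all-pass section rather than an FIR tap line.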
Abstract:
Several landforms found in the fold-and-thrust belt of the Central Precordillera, Pre-Andes of Argentina, which have often been attributed to tectonic stresses, are in fact related to non-tectonic processes, that is, gravitational surface structures. These second-order structures, interpreted as gravitational collapse structures, have developed on the western flank of the sierras de La Dehesa and Talacasto. They include rock slides, rock falls, wrinkle folds, slip sheets, and flaps, among others, which together make up a monoclinal fold dipping between 30° and 60° to the west. The gravity collapse structures are parallel to the regional strike of the Sierra de la Dehesa and occur in Ordovician limestones and dolomites. The westward dip of the beds; the presence of bedding planes, fractures, and joints; and the lithology (limestone interbedded with incompetent argillaceous beds) would have favoured their occurrence. Movement of the detached structures has been controlled by lithology as well as by bedding and joints. Detachment and initial transport of the gravity collapse structures and rock slides on the western flank of the Sierra de la Dehesa were tightly controlled by three structural elements: 1) sliding surfaces developed on parallel-bedded strata dipping more than 30° in the slope direction; 2) joint sets forming lateral and transverse traction cracks that release extensional stresses; and 3) discontinuities fragmenting the sliding surfaces. Other factors, local (lithology, structure, and topography) and regional (high seismic activity and possibly wetter conditions during the postglacial period), were decisive in promoting the progressive failure of the western mountainside in the easternmost foothills of the Central Precordillera.