12 results for Network performance
at Cochin University of Science
Abstract:
The service quality of any sector has two major aspects, namely technical and functional. Technical quality can be attained by maintaining the technical specifications decided by the organization. Functional quality refers to the manner in which the service is delivered to the customer, which can be assessed through customer feedback. A field survey was conducted based on the management tool SERVQUAL, by designing 28 constructs under 7 dimensions of service quality. Stratified sampling techniques were used to obtain 336 valid responses, and the gap scores between expectations and perceptions were analyzed using statistical techniques to identify the weakest dimension. To assess the technical aspect of availability, six months of live outage data of base transceiver stations were collected. Statistical and exploratory techniques were used to model the network performance. The failure patterns were modeled as competing risk models, and the probability distributions of service outages and restorations were parameterized. Since the availability of a network is a function of the reliability and maintainability of the network elements, any service provider who wishes to keep up their service level agreements on availability should be aware of the variability of these elements and its effect on their interactions. The availability variations were studied by designing a discrete event simulation model with probabilistic input parameters. The probability distribution parameters derived from the live data analysis were used to design experiments defining the availability domain of the network under consideration. The availability domain can be used as a reference for planning and implementing maintenance activities. A new metric is proposed which incorporates a consistency index along with key service parameters and can be used to compare the performance of different service providers. The developed tool can be used for reliability analysis of mobile communication systems and assumes greater significance in the wake of mobile number portability. It also makes possible a relative measure of the effectiveness of different service providers.
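As a rough illustration of the simulation approach described above, the sketch below alternates failure and restoration cycles drawn from parameterized distributions and reports the availability achieved in each replication. The Weibull time-to-failure, lognormal restoration time, and all numeric parameters are assumptions for illustration, not values from the thesis.

```python
import random

def simulate_availability(horizon_h, ttf_shape, ttf_scale, mu, sigma, seed=1):
    """Alternate failure/restoration cycles and return the availability fraction.

    Time-to-failure ~ Weibull(scale, shape); restoration ~ lognormal(mu, sigma).
    """
    rng = random.Random(seed)
    t, uptime = 0.0, 0.0
    while t < horizon_h:
        up = rng.weibullvariate(ttf_scale, ttf_shape)   # hours until next outage
        uptime += min(up, horizon_h - t)
        t += up
        if t >= horizon_h:
            break
        t += rng.lognormvariate(mu, sigma)              # restoration time
    return uptime / horizon_h

# Hypothetical parameters; real values would come from the live outage data.
runs = [simulate_availability(6 * 30 * 24, 1.2, 500.0, 0.5, 0.8, seed=s)
        for s in range(100)]
print(min(runs), max(runs))   # the availability "domain" across replications
```

Running many seeded replications, as in the last lines, is one way to bound the availability domain that the abstract proposes as a maintenance-planning reference.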
Abstract:
The present study attempts to analyse the structure, performance and growth of women industrial cooperatives in Kannur district, Kerala. The study encompasses all women industrial cooperatives registered at the District Industries Centre, Kannur, that currently exist. The women industrial cooperatives are classified into two groups: those with a network and those without. In Kannur there are 54 units working as women industrial cooperatives. One of the main problems the women cooperatives face is the lack of working capital, followed by marketing problems. The competition between cooperatives and private traders is very high. The variables examined to analyse the performance of women industrial cooperatives in Kannur showed that inter-unit differences exist in almost all the variables. The financial structure shows that the short-term liquidity of women cooperatives in Kannur favours the units which have political networks; but the long-term financial coverage is seen to be highly geared in this group, not because of a decline in net worth but due to a proportionately higher increase in financial liabilities in the form of borrowings. The encouragement given by the government through financial stake and other incentives has been the major factor in the formation and growth of women cooperatives. As a result, both productivity and efficiency improve in the cooperatives. In short, the present study helped to capture the impact, role and dynamics of networking in general, and of socio-political networks in particular, in relation to intra- and inter-unit differences in the structure, growth and performance of women industrial cooperative societies in Kannur district.
Abstract:
Neural networks have emerged as the topic of the day. The spectrum of their applications is as wide as from ECG noise filtering to seismic data analysis, and from elementary particle detection to electronic music composition. The focal point of the proposed work is an application of a massively parallel connectionist model network for detection of a sonar target. This task is segmented into: (i) generation of training patterns from sea noise that contains the radiated noise of a target, for teaching the network; (ii) selection of a suitable network topology and learning algorithm; and (iii) training of the network and its subsequent testing, where the network detects, in unknown patterns applied to it, the presence of the features it has already learned. A three-layer perceptron using backpropagation learning is initially subjected to recursive training with example patterns (derived from sea ambient noise with and without the radiated noise of a target). On every presentation, the error in the output of the network is propagated back, and the weights and the bias associated with each neuron in the network are modified in proportion to this error measure. During this iterative process, the network converges and extracts the target features, which get encoded into its generalized weights and biases. In every unknown pattern that the converged network is subsequently confronted with, it searches for the features already learned and outputs an indication of their presence or absence. This capability for target detection is exhibited by the response of the network to various test patterns presented to it. Three network topologies are tried with two variants of backpropagation learning, and a grading of the performance of each combination is subsequently made.
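A minimal sketch of the training loop described above: a three-layer perceptron trained with backpropagation, where every weight and bias update is proportional to the back-propagated error. The toy random patterns, the 8-4-1 layer sizes and the learning rate are placeholders, not the thesis's sonar data or chosen topology.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-ins for spectral patterns: rows are training examples,
# label 1 = "target present", 0 = "ambient noise only".
X = rng.normal(size=(20, 8))
y = rng.integers(0, 2, size=(20, 1)).astype(float)

# One hidden layer: weights and biases for an 8-4-1 perceptron.
W1, b1 = rng.normal(scale=0.5, size=(8, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)
lr = 0.5

for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                    # hidden activations
    out = sigmoid(h @ W2 + b2)                  # network output
    err = out - y                               # output error, propagated back
    d2 = err * out * (1 - out)                  # delta at output layer
    d1 = (d2 @ W2.T) * h * (1 - h)              # delta at hidden layer
    W2 -= lr * h.T @ d2;  b2 -= lr * d2.sum(0)  # updates proportional
    W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(0)  # to the error measure
```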
Abstract:
The Internet today has become a vital part of day-to-day life, owing to the revolutionary changes it has brought about in various fields. Dependence on the Internet as an information highway and knowledge bank is increasing exponentially, so that turning back is beyond imagination. Transfer of critical information is also being carried out through the Internet. This widespread use of the Internet, coupled with the tremendous growth in e-commerce and m-commerce, has created a vital need for information security. The Internet has also become an active field of crackers and intruders. The whole development in this area can become null and void if fool-proof security of the data, without any chance of adulteration, is not ensured. It is hence a challenge before the professional community to develop systems to ensure the security of data sent through the Internet. Stream ciphers, hash functions and message authentication codes play vital roles in providing security services like confidentiality, integrity and authentication of the data sent through the Internet. There are several such popular and dependable techniques which have been in wide use for quite a long time. This long-term exposure makes them vulnerable to successful or near-successful attack attempts. Hence it is the need of the hour to develop new algorithms with better security. Studies were therefore conducted on the various types of algorithms being used in this area, with focus on identifying the properties that impart security. Making use of the perception derived from these studies, new algorithms were designed. The performance of these algorithms was then studied, followed by necessary modifications, to yield an improved system consisting of a new stream cipher algorithm MAJE4, a new hash code JERIM-320 and a new message authentication code MACJER-320. Detailed analysis and comparison with the existing popular schemes were also carried out to establish the security levels. The Secure Socket Layer (SSL) / Transport Layer Security (TLS) protocol is one of the most widely used security protocols on the Internet. The cryptographic algorithms RC4 and HMAC have been in use for achieving security services like confidentiality and authentication in SSL/TLS. But recent attacks on RC4 and HMAC have raised questions about the reliability of these algorithms. Hence MAJE4 and MACJER-320 have been proposed as substitutes for them. Detailed studies on the performance of these new algorithms were carried out; it has been observed that they are dependable alternatives.
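The proposed MAJE4, JERIM-320 and MACJER-320 constructions are not reproduced here; as a generic illustration of the authentication service that any message authentication code provides, the sketch below uses Python's standard-library HMAC with SHA-256. Any modification of the message or the tag makes verification fail.

```python
import hmac
import hashlib

key = b"shared-secret-key"               # known only to sender and receiver
message = b"transfer 100 to account 42"

# Sender attaches a tag computed over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver recomputes the tag and compares in constant time; tampering
# with either the message or the tag makes this check fail.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```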
Abstract:
The proliferation of wireless sensor networks in a large spectrum of applications has been spurred by rapid advances in MEMS (micro-electro-mechanical systems) based sensor technology, coupled with low-power, low-cost digital signal processors and radio frequency circuits. A sensor network is composed of thousands of low-cost and portable devices with sensing, computing and wireless communication capabilities. This large collection of tiny sensors can form a robust distributed system for data computing, communication, automated information gathering and distributed sensing. The main attractive feature is that such a sensor network can be deployed in remote areas. Since the sensor nodes are battery-powered, all the nodes should collaborate to form a fault-tolerant network, so as to provide efficient utilization of precious network resources like the wireless channel, memory and battery capacity. The most crucial constraint is energy consumption, which has become the prime challenge in the design of long-lived sensor nodes.
Abstract:
A Mobile Ad-hoc Network (MANET) consists of a collection of mobile nodes without central coordination. In a MANET, node mobility and dynamic topology play an important role in performance. MANETs provide a solution for network connection anywhere and at any time. The major features of a MANET are quick set-up, self-organization and self-maintenance. Routing is a major challenge in MANETs due to their dynamic topology and high mobility, and several routing algorithms have been developed. This paper studies the AODV protocol and how AODV performs under multiple connections in the network. Several issues have been identified, with bandwidth recognized as the prominent factor reducing the performance of the network. This paper presents an improvement over standard AODV for simultaneous multiple connections that takes node bandwidth into consideration.
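One plausible way to fold node bandwidth into AODV-style route selection, sketched below, is to rank discovered routes by their bottleneck (minimum) node bandwidth and break ties on hop count. The metric, node names and bandwidth figures are illustrative assumptions, not the paper's actual modification.

```python
def bottleneck_bandwidth(path, node_bw):
    """Available bandwidth of a route is limited by its weakest node."""
    return min(node_bw[n] for n in path)

def select_route(candidate_paths, node_bw):
    """Among routes discovered by AODV-style flooding, prefer the one whose
    bottleneck bandwidth is largest; break ties in favour of fewer hops."""
    return max(candidate_paths,
               key=lambda p: (bottleneck_bandwidth(p, node_bw), -len(p)))

node_bw = {"A": 11, "B": 2, "C": 5.5, "D": 11, "E": 5.5}   # Mbps, hypothetical
paths = [["A", "B", "E"], ["A", "C", "D", "E"]]
print(select_route(paths, node_bw))   # the longer route wins: 5.5 Mbps vs 2 Mbps
```

Plain AODV would pick the shorter route here; weighting by bottleneck bandwidth trades a hop for a less congested path.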
Effectiveness of Feature Detection Operators on the Performance of Iris Biometric Recognition System
Abstract:
Iris recognition is a highly efficient biometric identification system with great possibilities for the future in the security systems area. Its robustness and unobtrusiveness, as opposed to most of the currently deployed systems, make it a good candidate to replace most of the security systems around. By making use of the distinctiveness of iris patterns, iris recognition systems obtain a unique mapping for each person; identification is then possible by applying an appropriate matching algorithm. In this paper, Daugman's Rubber Sheet model is employed for iris normalization and unwrapping, a descriptive statistical analysis of different feature detection operators is performed, the extracted features are encoded using Haar wavelets, and Hamming distance is used as the matching algorithm for classification. The system was tested on the UBIRIS database. The Canny edge detection algorithm is found to be the best at extracting most of the iris texture: the success rate of feature detection using Canny is 81%, the False Accept Rate is 9% and the False Reject Rate is 10%.
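Matching with Hamming distance amounts to counting disagreeing bits between two binary iris codes over the bits valid in both, as in the sketch below. The 2048-bit code length, the 5% noise level and the 0.32 decision threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the bits valid in both codes."""
    valid = mask_a & mask_b
    return np.count_nonzero((code_a ^ code_b) & valid) / np.count_nonzero(valid)

rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048, dtype=np.uint8).astype(bool)
probe = enrolled.copy()
probe[rng.random(2048) < 0.05] ^= True   # flip ~5% of bits as acquisition noise
mask = np.ones(2048, dtype=bool)         # no eyelid/eyelash occlusion in this toy

THRESHOLD = 0.32                         # hypothetical decision threshold
hd = hamming_distance(enrolled, probe, mask, mask)
print(hd, "match" if hd < THRESHOLD else "non-match")
```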
Abstract:
In Wireless Sensor Networks (WSNs), neglecting the effects of varying channel quality can lead to unnecessary wastage of precious battery resources, and in turn to the rapid depletion of sensor energy and the partitioning of the network. Fairness is a critical issue when accessing a shared wireless channel, and fair scheduling must be employed to provide the proper flow of information in a WSN. In this paper, we develop a channel-adaptive MAC protocol with a traffic-aware dynamic power management algorithm for efficient packet scheduling and queuing in a sensor network, with the time-varying characteristics of the wireless channel also taken into consideration. The proposed protocol calculates a combined weight value based on the channel state and link quality. Transmission is then allowed only for those nodes with weights greater than a minimum quality threshold; nodes attempting to access the wireless medium with a low weight are allowed to transmit only when their weight becomes high. This can leave many poor-quality nodes deprived of transmission for a considerable amount of time. To avoid buffer overflow and to achieve fairness for the poor-quality nodes, we design a load prediction algorithm. We also design a traffic-aware dynamic power management scheme to minimize energy consumption by turning off the radio interface of all unnecessary nodes that are not included in the routing path. Simulation results show that our proposed protocol achieves higher throughput and fairness while also reducing delay.
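A toy rendering of the threshold rule described above: each node's weight combines channel state and link quality, and only nodes above the threshold transmit. In place of the paper's load prediction algorithm, this sketch simply grants a starved node access after a fixed number of deprived rounds; ALPHA, THRESHOLD and MAX_WAIT are assumed tuning constants.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    channel_state: float   # 0..1, instantaneous channel quality estimate
    link_quality: float    # 0..1, longer-term link quality estimate
    wait_rounds: int = 0   # rounds spent deprived of the channel

ALPHA, THRESHOLD, MAX_WAIT = 0.6, 0.5, 3   # hypothetical tuning constants

def weight(n: Node) -> float:
    """Combined weight from channel state and link quality."""
    return ALPHA * n.channel_state + (1 - ALPHA) * n.link_quality

def schedule(nodes):
    """Grant transmission to nodes above the quality threshold; nodes starved
    past MAX_WAIT rounds are granted anyway, to preserve fairness."""
    granted = []
    for n in nodes:
        if weight(n) >= THRESHOLD or n.wait_rounds >= MAX_WAIT:
            granted.append(n.name)
            n.wait_rounds = 0
        else:
            n.wait_rounds += 1
    return granted

nodes = [Node("s1", 0.9, 0.8), Node("s2", 0.2, 0.3), Node("s3", 0.6, 0.5)]
for rnd in range(5):
    print(rnd, schedule(nodes))   # s2 is deprived, then periodically admitted
```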
Abstract:
MicroRNAs are short non-coding RNAs that can regulate gene expression during various crucial cell processes such as differentiation, proliferation and apoptosis. Changes in the expression profiles of miRNAs play an important role in the development of many cancers, including CRC. Therefore, the identification of cancer-related miRNAs and their target genes is important for cancer biology research. In this paper, we applied a TSK-type recurrent neural fuzzy network (TRNFN) to infer the miRNA–mRNA association network from paired miRNA and mRNA expression profiles of CRC patients. We demonstrated that the proposed method achieved good performance in recovering known, experimentally verified miRNA–mRNA associations. Moreover, our approach proved successful in identifying 17 validated cancer miRNAs which are directly involved in the CRC-related pathways. Targeting such miRNAs may help not only to prevent the recurrence of disease but also to control the growth of advanced metastatic tumors. Our regulatory modules provide valuable insights into the pathogenesis of cancer.
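For orientation, the sketch below shows the static first-order TSK (Takagi–Sugeno–Kang) inference step at the core of such models: Gaussian antecedent memberships yield rule firing strengths, and the output is the strength-weighted average of linear rule consequents. The recurrent structure of TRNFN is omitted, and all rule parameters here are invented for illustration.

```python
import numpy as np

def gaussian(x, c, s):
    """Gaussian membership degree of x in fuzzy sets centred at c, spread s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_infer(x, rules):
    """First-order TSK inference: each rule has per-input antecedents
    (centre, spread) and a linear consequent y = a . x + b; the output is
    the firing-strength-weighted average of the rule consequents."""
    num = den = 0.0
    for centres, spreads, a, b in rules:
        w = np.prod(gaussian(x, centres, spreads))   # rule firing strength
        num += w * (np.dot(a, x) + b)
        den += w
    return num / den

# Two hypothetical rules over a 2-dimensional input (e.g. a normalised
# miRNA/mRNA expression pair).
rules = [
    (np.array([0.2, 0.8]), np.array([0.3, 0.3]), np.array([1.0, -0.5]), 0.1),
    (np.array([0.7, 0.3]), np.array([0.3, 0.3]), np.array([-0.4, 0.9]), 0.0),
]
print(tsk_infer(np.array([0.5, 0.5]), rules))
```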
Abstract:
While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds are placing a large burden on the energy efficiency of high-speed links and render the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides a deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
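The error clustering that motivates this analysis shows up even in a toy Monte Carlo model: below, binary symbols pass through a hypothetical two-tap residual-ISI channel with Gaussian noise, and the measured probability of two adjacent errors exceeds what independence would predict, because neighbouring decisions share symbols through the ISI. The tap and noise values are assumptions; real links operate at far lower error rates, which is precisely where direct simulation of this kind becomes infeasible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
s = rng.choice([-1.0, 1.0], size=n)          # transmitted binary symbols

# Received sample = symbol + residual ISI from two previous symbols + noise.
h1, h2, sigma = 0.3, 0.2, 0.3                # hypothetical residual-ISI taps
rx = s.copy()
rx[1:] += h1 * s[:-1]
rx[2:] += h2 * s[:-2]
rx += rng.normal(scale=sigma, size=n)

err = np.sign(rx) != s                       # threshold-detector error events
p = err.mean()
p_adj = (err[1:] & err[:-1]).mean()          # probability of adjacent errors
print(f"BER={p:.3e}  P(adjacent)={p_adj:.3e}  if independent={p*p:.3e}")
```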
Abstract:
The country has witnessed a tremendous increase in the vehicle population and in axle loading patterns during the last decade, leaving its road network overstressed and leading to premature failure. The type of deterioration present in the pavement should be considered when determining whether it has a functional or structural deficiency, so that an appropriate overlay type and design can be developed. Structural failure arises from conditions that adversely affect the load-carrying capability of the pavement structure: inadequate thickness, cracking, distortion and disintegration cause structural deficiency. Functional deficiency arises when the pavement does not provide a smooth riding surface and comfort to the user. This can be due to poor surface friction and texture, hydroplaning and splash from the wheel path, rutting, and excess surface distortion such as potholes, corrugation, faulting, blow-ups, settlement, heaves etc. Functional condition determines the level of service provided by the facility to its users at a particular time and also the Vehicle Operating Costs (VOC), thus influencing the national economy. Prediction of pavement deterioration is helpful in assessing the remaining effective service life (RSL) of the pavement structure on the basis of reduction in performance levels, and in applying various alternative designs and rehabilitation strategies with a long-range funding requirement for pavement preservation. In addition, such models can predict the impact of treatment on the condition of the sections. Infrastructure prediction models can thus be classified into four groups, namely primary response models, structural performance models, functional performance models and damage models. The factors affecting the deterioration of roads are very complex in nature and vary from place to place. Hence there is a need for a thorough study of the deterioration mechanism under varied climatic zones and soil conditions before arriving at a definite strategy of road improvement. Realizing the need for a detailed study involving all types of roads in the state with varying traffic and soil conditions, the present study has been attempted. This study attempts to identify the parameters that affect the performance of roads and to develop performance models suitable to Kerala conditions. A critical review of the various factors that contribute to pavement performance has been presented, based on the data collected from selected road stretches and also from five Corporations of Kerala. These roads represent urban conditions as well as National Highways, State Highways and Major District Roads in sub-urban and rural conditions. This research work is a pursuit towards a study of the road condition of Kerala with respect to varying soil, traffic and climatic conditions, periodic performance evaluation of selected roads of representative types, and development of distress prediction models for the roads of Kerala. In order to achieve this aim, the study is divided into two parts. The first part deals with the study of the pavement condition and subgrade soil properties of urban roads distributed over the five Corporations of Kerala, namely Thiruvananthapuram, Kollam, Kochi, Thrissur and Kozhikode. From the 44 selected roads, 68 homogeneous sections were studied. The data collected on the functional and structural condition of the surface include pavement distress in terms of cracks, potholes, rutting, raveling and pothole patching.
The structural strength of the pavement was measured as rebound deflection using Benkelman Beam deflection studies. In order to collect the details of the pavement layers and find the subgrade soil properties, trial pits were dug and the in-situ field density was found using the Sand Replacement Method. Laboratory investigations were carried out to determine the subgrade soil properties: soil classification, Atterberg limits, Optimum Moisture Content, Field Moisture Content and 4-day soaked CBR. The relative compaction in the field was also determined. Traffic details were collected by conducting a traffic volume count survey and an axle load survey. From the data thus collected, the strength of the pavement, a function of the layer coefficients and thicknesses, was calculated and represented as the Structural Number (SN). This was further related to the CBR value of the soil to find the Modified Structural Number (MSN). The condition of the pavement was represented in terms of the Pavement Condition Index (PCI), which is a function of the distress of the surface at the time of investigation and was calculated in the present study using the deduct value method developed by the U.S. Army Corps of Engineers. The influence of subgrade soil type and pavement condition on the relationship between MSN and rebound deflection was studied using appropriate plots for the predominant types of soil and for classified values of the Pavement Condition Index. The relationship will be helpful for practicing engineers in designing the overlay thickness required for a pavement without conducting the BBD test. Regression analysis using SPSS was done with various trials to find the best-fit relationship between rebound deflection and CBR, and other soil properties for Gravel, Sand, Silt and Clay fractions. The second part of the study deals with the periodic performance evaluation of selected road stretches representing National Highways (NH), State Highways (SH) and Major District Roads (MDR), located in different geographical conditions and with varying traffic. Eight road sections, divided into 15 homogeneous sections, were selected for the study, and six sets of continuous periodic data were collected. The periodic data collected include the functional and structural condition in terms of distress (potholes, pothole patches, cracks, rutting and raveling), skid resistance using a portable skid resistance pendulum, surface unevenness using a Bump Integrator, texture depth using the sand patch method, and rebound deflection using the Benkelman Beam. Baseline data of the study stretches were collected as one-time data. Pavement history was obtained as secondary data. Pavement drainage characteristics were collected in terms of camber or cross slope, using a camber board (slope meter), for the carriageway and shoulders, along with the availability of longitudinal side drains, presence of valleys, terrain condition, soil moisture content, water table data, High Flood Level, rainfall data, land use and the cross slope of the adjoining land. These data were used for finding the drainage condition of the study stretches. Traffic studies were conducted, including classified volume counts and axle load studies. From the field data thus collected, the progression of each parameter was plotted for all the study roads and validated for accuracy. The Structural Number (SN) and Modified Structural Number (MSN) were calculated for the study stretches.
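As a pointer to how these two indices are commonly computed, the sketch below combines layer coefficients and thicknesses into SN and adds a CBR-based subgrade term for MSN using the widely used TRL relation. Both the relation and the example layer values are assumptions for illustration, not necessarily the exact forms used in this thesis.

```python
import math

def structural_number(layers):
    """SN = sum of (layer strength coefficient x layer thickness in inches)."""
    return sum(a * d for a, d in layers)

def modified_structural_number(sn, cbr):
    """MSN adds a subgrade contribution from CBR; this is the commonly cited
    TRL relation, assumed here rather than taken from the thesis itself."""
    lc = math.log10(cbr)
    return sn + 3.51 * lc - 0.85 * lc ** 2 - 1.43

# Hypothetical pavement: bituminous surfacing over granular base/sub-base.
layers = [(0.30, 1.6), (0.14, 6.0), (0.11, 4.0)]   # (coefficient, inches)
sn = structural_number(layers)
print(round(sn, 2), round(modified_structural_number(sn, cbr=5.0), 2))
```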
The progression of deflection, distress, unevenness, skid resistance and macro texture of the study roads was evaluated. Since the deterioration of a pavement is a complex phenomenon to which all the above factors contribute, pavement deterioration models were developed as non-linear regression models using SPSS, with the periodic data collected for all the above road stretches. General models were developed for cracking progression, raveling progression, pothole progression and roughness progression using SPSS. A model for construction quality was also developed. Calibration of the HDM-4 pavement deterioration models for local conditions was done using the data for cracking, raveling, potholes and roughness. Validation was done using the data collected in 2013. The application of HDM-4 to compare different maintenance and rehabilitation options was studied considering deterioration parameters like cracking, potholes and raveling. The alternatives considered for analysis were a base alternative with crack sealing and patching, an overlay with 40 mm BC using ordinary bitumen, an overlay with 40 mm BC using Natural Rubber Modified Bitumen, and an overlay of Ultra Thin White Topping. Economic analysis of these options was done considering the Life Cycle Cost (LCC). The average speeds obtainable with these options were also compared. The results were in favour of Ultra Thin White Topping over flexible pavements. Hence, design charts were also plotted for the estimation of maximum wheel load stresses for different slab thicknesses under different soil conditions. The design charts show the maximum stress for a particular slab thickness under different soil conditions, incorporating different k values; these charts can be handy for a design engineer. Fuzzy rule based models developed for site-specific conditions were compared with the regression models developed using SPSS. The Riding Comfort Index (RCI) was calculated and correlated with unevenness to develop a relationship. Relationships were also developed between Skid Number and the Macro Texture of the pavement. The effort made through this research work will be helpful to highway engineers in understanding the behaviour of flexible pavements under Kerala conditions and in arriving at suitable maintenance and rehabilitation strategies. Key Words: Flexible Pavements – Performance Evaluation – Urban Roads – NH – SH and other roads – Performance Models – Deflection – Riding Comfort Index – Skid Resistance – Texture Depth – Unevenness – Ultra Thin White Topping
Abstract:
Coordination among supply chain members is essential for better supply chain performance. An effective method of improving supply chain coordination is to implement proper coordination mechanisms. The primary objective of this research is to study the performance of a multi-level supply chain while using selected coordination mechanisms separately, and in combination, under lost sale and backorder cases. The coordination mechanisms used in this study are price discount, delay in payment and different types of information sharing. Mathematical modelling and simulation modelling are used to analyse the performance of the supply chain under these mechanisms. Initially, a three-level supply chain consisting of a supplier, a manufacturer and a retailer was used to study the combined effect of price discount and delay in payment on the performance (profit) of the supply chain using mathematical modelling. This study showed that implementation of the individual mechanisms improves the performance of the supply chain compared to no coordination; when more than one mechanism is used in combination, performance in most cases improves further. The three-level supply chain considered in the mathematical modelling was then extended to a three-level network supply chain consisting of four retailers, two wholesalers, and a manufacturer with an infinite part supplier. The performance of this network supply chain was analysed under both lost sale and backorder cases using simulation modelling with the same mechanisms, price discount and delay in payment, used in the mathematical modelling. This study also showed that the performance of the supply chain improves significantly when a combination of mechanisms is used, as obtained earlier. It was found that the effect (increase in profit) of delay in payment, and of the combination of price discount and delay in payment, on supply chain profit is relatively high in the lost sale case. Sensitivity analysis showed that the order cost of the retailer plays a major role in the performance of the supply chain, as it decides the order quantities of the other players in the supply chain in this study. Sensitivity analysis also showed that there is a proportional change in supply chain profit with a change in the rate of return of any player. In the case of price discount, elasticity of demand is an important factor in improving the performance of the supply chain. It was also found that a change in the permissible delay in payment given by the seller to the buyer affects the supply chain profit more than the delay in payment availed by the buyer from the seller. In continuation of the above, a study on the performance of a four-level supply chain consisting of a manufacturer, a wholesaler, a distributor and a retailer, with information sharing as the coordination mechanism, under lost sale and backorder cases, was conducted using a simulation game with live players. In this study, the best performance was obtained when sharing 'demand and supply chain performance', compared to the other seven types of information sharing, including the traditional method. The study also revealed that the effect of information sharing on supply chain performance is higher in the lost sale case than in the backorder case. The in-depth analysis in this part of the study showed that lack of information sharing need not always result in the bullwhip effect. Instead of the bullwhip effect, lack of information sharing produced a huge hike in lost sales cost or backorder cost in this study, which is also unfavourable for the supply chain. The overall analysis quantified the extent of improvement in supply chain performance under the different cases. Sensitivity analysis revealed useful insights about the decision variables of the supply chain, which will be useful for supply chain management practitioners in taking appropriate decisions.
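To make the delay-in-payment mechanism concrete, the toy sketch below computes a retailer's per-cycle profit in a simple EOQ-style setting, where a permissible credit period lets the retailer earn interest on sales revenue before settling with the supplier (a common modelling assumption). All parameter values are invented for illustration and are not the thesis's models or data.

```python
def cycle_profit(demand, price, cost, order_qty, order_cost, hold_rate,
                 credit_days=0, interest_rate=0.12):
    """Per-cycle retailer profit for a simple EOQ-style model. A permissible
    delay in payment lets the retailer earn interest on revenue during the
    credit period before paying the supplier."""
    cycle_len = order_qty / demand                        # years per cycle
    revenue = price * order_qty
    purchase = cost * order_qty
    holding = hold_rate * cost * (order_qty / 2) * cycle_len
    credit_gain = interest_rate * revenue * (credit_days / 365.0)
    return revenue - purchase - order_cost - holding + credit_gain

base = cycle_profit(1200, 10.0, 7.0, 200, 50.0, 0.2)
with_delay = cycle_profit(1200, 10.0, 7.0, 200, 50.0, 0.2, credit_days=30)
print(base, with_delay)   # coordination via delay in payment raises profit
```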