610 results for Telecommunication.
Abstract:
This paper investigates the association between global community concerns about bribery activities and anti-bribery disclosure practices by two Chinese telecommunications companies operating internationally, namely China Mobile and ZTE. Based on content analysis of annual reports and global news media articles over a 16-year period (1995–2010), the findings suggest that the changes in the level of disclosures by the two major Chinese telecommunications companies were closely associated with the level of international concern over bribery practices within the Chinese telecommunications industry. This finding indicates that the companies adopt anti-bribery disclosure practices in order to minimise the gap of trust (social capital) between the companies themselves and global stakeholders. In this paper we argue that, for domestic companies in China, culturally constructed social capital, such as guanxi, creates a level of trust between managers and their stakeholders, which obviates the need for managers to disclose anti-bribery performance information. However, for companies operating internationally, as social capital is inadequate to bridge the gap of trust between managers and global stakeholders, managers use disclosures of anti-bribery performance information as a way to minimise such a gap.
Abstract:
This study examines supervisors' emerging new role in a technical customer service and home customers division of a large Finnish telecommunications corporation. The data come from a second-generation knowledge management project, an intervention research study conducted for supervisors of the division. The study exemplifies how supervision work is transforming in a high-technology organization characterized by a high speed of change in technologies, products, and grass-roots work practices. The intervention research was conducted in the division during spring 2000. The primary analyzed data consist of six two-hour video-recorded intervention sessions. The unit of analysis is collective learning actions. The researcher first transcribed the video-recorded meetings and then analyzed this qualitative data using an analytical schema based on collective learning actions. The supervisors' role is conceptualized as that of an actor in a collective and dynamic activity system, based on ideas from cultural-historical activity theory. On knowledge management, the researcher has taken a second-generation knowledge management viewpoint, following ideas from cultural-historical activity theory and developmental work research. Second-generation knowledge management considers knowledge embedded and constructed in collective practices, such as innovation networks or communities of practice (the supervisors' work community), which have the capacity to create new knowledge. The analysis and illustration of the supervisors' emerging new role is conceptualized in this framework using methodological ideas derived from activity theory and developmental work research. Major findings of the study show that supervisors' emerging new role in a high-technology telecommunication organization characterized by a high speed of discontinuous change in technologies, products, and grass-roots practices cannot be defined or characterized using a normative management role/model.
Their role is expanding in two dimensions: (1) socially, and (2) in new knowledge and work practices. The expansion in the organization and the inter-organizational network (social expansion) creates pressure to manage a network of co-operation partners and subordinates. On the other hand, the fast pace of change in technological solutions, new products, and novel customer wants (expansion in knowledge) creates pressure on supervisors to quickly innovate new work practices to manage this change. Keywords: activity theory, knowledge management, developmental work research, supervisors, high technology organizations, telecommunication organizations, second-generation knowledge management, competence laboratory, intervention research, learning actions.
Abstract:
There is an endless quest for new materials to meet the demands of advancing technology. Thus, we need new magnetic and metallic/semiconducting materials for spintronics, new low-loss dielectrics for telecommunication, new multi-ferroic materials that combine both ferroelectricity and ferromagnetism for memory devices, new piezoelectrics that do not contain lead, new lithium-containing solids for application as cathode/anode/electrolyte in lithium batteries, hydrogen storage materials for mobile/transport applications and catalyst materials that can convert, for example, methane to higher hydrocarbons, and the list is endless! Fortunately for us, chemistry - inorganic chemistry in particular - plays a crucial role in this quest. Most of the functional materials mentioned above are inorganic non-molecular solids, while much of conventional inorganic chemistry deals with isolated molecules or molecular solids. Even so, the basic concepts that we learn in inorganic chemistry, for example, acidity/basicity, oxidation/reduction (potentials), crystal field theory, low spin-high spin/inner sphere-outer sphere complexes, the role of d-electrons in transition metal chemistry, electron-transfer reactions, coordination geometries around metal atoms, Jahn-Teller distortion, metal-metal bonds, cation-anion (metal-nonmetal) redox competition in the stabilization of oxidation states - all find crucial application in the design and synthesis of inorganic solids possessing technologically important properties. An attempt has been made here to illustrate the role of inorganic chemistry in this endeavour, drawing examples from the literature as well as from the research work of my group.
Abstract:
We study the performance of greedy scheduling in multihop wireless networks where the objective is aggregate utility maximization. Following standard approaches, we consider the dual of the original optimization problem. Optimal scheduling requires selecting independent sets of maximum aggregate price, but this problem is known to be NP-hard. We propose and evaluate a simple greedy heuristic. Analytical bounds on performance are provided and simulations indicate that the greedy heuristic performs well in practice.
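The scheduling subproblem in the dual is a maximum-weight independent set computation over the conflict graph, with link prices as weights. The sketch below is an illustrative greedy heuristic of the usual price-ordered kind, not necessarily the paper's exact variant; the graph and prices are invented:

```python
def greedy_max_weight_independent_set(adj, price):
    # adj: dict mapping each link to the set of links it conflicts with
    # price: dict mapping each link to its current dual price (weight)
    # Consider links in decreasing price order; schedule a link only if it
    # conflicts with nothing already scheduled.
    chosen = set()
    for v in sorted(price, key=price.get, reverse=True):
        if not (adj[v] & chosen):
            chosen.add(v)
    return chosen
```

On a three-link conflict path with prices 3, 5, 3, the greedy rule picks the middle link (total weight 5) while the optimum is the two outer links (weight 6), which is exactly why only approximation guarantees, not optimality, can be hoped for from such heuristics.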
Abstract:
This study examined the posited link between networked governance (the activities of NGOs and the media) and the anti-bribery disclosures of two global telecommunication companies. Based on a joint consideration of legitimacy theory, media agenda setting theory and responsive regulation, the findings show that anti-bribery disclosures are positively associated with the activities of the media and NGO initiatives. The findings also show that companies make anti-bribery disclosures to maintain symbolic legitimacy but do little to effect substantive change in their accountability practices.
Abstract:
Bluetooth is a short-range radio technology operating in the unlicensed industrial-scientific-medical (ISM) band at 2.45 GHz. A piconet is basically a collection of slaves controlled by a master. A scatternet, on the other hand, is established by linking several piconets together in an ad hoc fashion to yield a global wireless ad hoc network. This paper proposes a scheduling policy that aims to achieve increased system throughput and reduced packet delays while providing reasonably good fairness among all traffic flows in Bluetooth piconets and scatternets. We propose a novel algorithm for scheduling slots to slaves for both piconets and scatternets using multi-layered parameterized policies. Our scheduling scheme works with real data and obtains an optimal feedback policy within a prescribed parameterized class of policies using an efficient two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm. We show the convergence of our algorithm to an optimal multi-layered policy. We also propose novel polling schemes for intra- and inter-piconet scheduling that are seen to perform well. We present an extensive set of simulation results and performance comparisons with existing scheduling algorithms. Our results indicate that our proposed scheduling algorithm outperforms the existing algorithms overall on a wide range of experiments for both piconets (Das et al. in INFOCOM, pp. 591–600, 2001; Lapeyrie and Turletti in INFOCOM conference proceedings, San Francisco, US, 2003; Shreedhar and Varghese in SIGCOMM, pp. 231–242, 1995) and scatternets (Har-Shai et al. in OPNETWORK, 2002; Saha and Matsumot in AICT/ICIW, 2006; Tan and Guttag in The 27th annual IEEE conference on local computer networks (LCN), Tampa, 2002). Our studies also confirm that our proposed scheme achieves high throughput and low packet delays with reasonable fairness among all the connections.
Abstract:
The problem of admission control of packets in communication networks is studied in the continuous-time queueing framework under different classes of service and delayed information feedback. We develop and use a variant of a simulation-based two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm for finding an optimal feedback policy within the class of threshold-type policies. Even though SPSA was originally designed for continuous parameter optimization, its variant for the discrete parameter case is seen to work well. We give a proof of the hypothesis needed to show convergence of the algorithm in our setting, along with a sketch of the convergence analysis. Extensive numerical experiments with the algorithm are illustrated for different parameter specifications. In particular, we study the effect of feedback delays on the system performance.
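For readers unfamiliar with SPSA, the core idea is that a single simultaneous random perturbation of all parameters yields a gradient estimate from only two function evaluations, regardless of dimension. The following is a minimal one-timescale sketch on a noiseless quadratic; the paper's algorithm is a two-timescale, discrete-parameter variant, and the gain schedules and test function here are standard illustrative choices, not the paper's:

```python
import random

def spsa_minimize(f, theta, iters=2000, a=0.1, c=0.1, seed=0):
    # Basic SPSA: perturb every coordinate simultaneously with random +-1
    # signs and form a gradient estimate from two evaluations of f.
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602        # commonly used SPSA gain schedules
        ck = c / k ** 0.101
        delta = [rng.choice((-1, 1)) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (f(plus) - f(minus)) / (2 * ck)
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta
```

Two evaluations per iteration is what makes SPSA attractive when each evaluation is itself a (possibly long) simulation, as in the queueing setting above.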
Abstract:
A distributed system is a collection of networked autonomous processing units which must work in a cooperative manner. Currently, large-scale distributed systems, such as various telecommunication and computer networks, are abundant and used in a multitude of tasks. The field of distributed computing studies what can be computed efficiently in such systems. Distributed systems are usually modelled as graphs where nodes represent the processors and edges denote communication links between processors. This thesis concentrates on the computational complexity of the distributed graph colouring problem. The objective of the graph colouring problem is to assign a colour to each node in such a way that no two nodes connected by an edge share the same colour. In particular, it is often desirable to use only a small number of colours. This task is a fundamental symmetry-breaking primitive in various distributed algorithms. A graph that has been coloured in this manner using at most k different colours is said to be k-coloured. This work examines the synchronous message-passing model of distributed computation: every node runs the same algorithm, and the system operates in discrete synchronous communication rounds. During each round, a node can communicate with its neighbours and perform local computation. In this model, the time complexity of a problem is the number of synchronous communication rounds required to solve the problem. It is known that 3-colouring any k-coloured directed cycle requires at least ½(log* k - 3) communication rounds and is possible in ½(log* k + 7) communication rounds for all k ≥ 3. This work shows that for any k ≥ 3, colouring a k-coloured directed cycle with at most three colours is possible in ½(log* k + 3) rounds. In contrast, it is also shown that for some values of k, colouring a directed cycle with at most three colours requires at least ½(log* k + 1) communication rounds. 
Furthermore, in the case of directed rooted trees, reducing a k-colouring to a 3-colouring requires at least log* k + 1 rounds for some k, and is possible in log* k + 3 rounds for all k ≥ 3. The new positive and negative results are derived using computational methods, as the existence of distributed colouring algorithms corresponds to the colourability of so-called neighbourhood graphs. The colourability of these graphs is analysed using Boolean satisfiability (SAT) solvers. Finally, this thesis shows that similar methods are applicable in capturing the existence of distributed algorithms for other graph problems, such as the maximal matching problem.
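The log* bounds come from iterated colour-length reduction: in each round a node compares its colour with its successor's and keeps only the index and value of one differing bit, shrinking k colours to O(log k) colours. Below is a minimal sketch of this classic Cole-Vishkin style step on a directed cycle, purely illustrative and distinct from the thesis's SAT-based constructions:

```python
import math

def log_star(k):
    # Iterated logarithm: how many times log2 must be applied to reach <= 1.
    n = 0
    while k > 1:
        k = math.log2(k)
        n += 1
    return n

def cv_step(colors):
    # One colour-reduction round on a directed cycle: node v XORs its colour
    # with its successor's, takes the lowest differing bit index i, and
    # adopts 2*i + (v's bit i) as its new colour. Adjacent nodes that picked
    # the same i must disagree on bit i, so the new colouring stays proper.
    n = len(colors)
    new = []
    for v in range(n):
        x = colors[v] ^ colors[(v + 1) % n]
        i = (x & -x).bit_length() - 1      # index of the lowest set bit
        new.append(2 * i + ((colors[v] >> i) & 1))
    return new
```

Repeating this step reduces any k-colouring to a constant number of colours in O(log* k) rounds; squeezing the final constant down to 3 colours takes a few extra rounds, which is where the additive constants in the bounds above come from.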
Abstract:
This paper reports new results concerning the capabilities of a family of service disciplines aimed at providing per-connection end-to-end delay (and throughput) guarantees in high-speed networks. This family consists of the class of rate-controlled service disciplines, in which traffic from a connection is reshaped to conform to specific traffic characteristics at every hop on its path. When used together with a scheduling policy at each node, this reshaping enables the network to provide end-to-end delay guarantees to individual connections. The main advantages of this family of service disciplines are their implementation simplicity and flexibility. On the other hand, because the delay guarantees provided are based on summing worst-case delays at each node, it has also been argued that the resulting bounds are very conservative, which may more than offset the benefits. In particular, other service disciplines, such as those based on Fair Queueing or Generalized Processor Sharing (GPS), have been shown to provide much tighter delay bounds. As a result, these disciplines, although more complex from an implementation point of view, have been considered for the purpose of providing end-to-end guarantees in high-speed networks. In this paper, we show that through "proper" selection of the reshaping to which we subject the traffic of a connection, the penalty incurred by computing end-to-end delay bounds based on worst cases at each node can be alleviated. Specifically, we show how rate-controlled service disciplines can be designed to outperform the Rate Proportional Processor Sharing (RPPS) service discipline. Based on these findings, we believe that rate-controlled service disciplines provide a very powerful and practical solution to the problem of providing end-to-end guarantees in high-speed networks.
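As a concrete illustration of the summed worst-case bound: in standard network-calculus terms, a (σ, ρ)-constrained flow that is reshaped to (σ, ρ) at every hop and served by a rate-latency server with rate R ≥ ρ and latency T incurs at most σ/R + T delay at that hop, and the rate-controlled end-to-end bound is the sum of those per-hop terms. A small sketch under those textbook assumptions (the parameter values are made up, and this is not the paper's specific construction):

```python
def e2e_delay_bound(sigma, rho, hops):
    # sigma, rho: burst and rate of the (sigma, rho)-constrained flow,
    # reshaped to the same envelope at every hop.
    # hops: list of (R, T) rate-latency servers with R >= rho.
    # Worst-case per-hop delay is sigma / R + T; the rate-controlled
    # end-to-end bound simply sums the per-hop worst cases.
    assert all(R >= rho for R, _ in hops)
    return sum(sigma / R + T for R, T in hops)
```

Summing per-hop worst cases is exactly the conservatism the paper addresses: a clever choice of reshaping envelope can prevent the worst case from recurring at every hop.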
Abstract:
Over the years, new power requirements for telecommunication, space, automotive and traction applications have arisen which need to be met. Although lead-acid and nickel-cadmium storage batteries continue to be the workhorses, advances have been limited, and the associated environmental hazards and recycling remain issues to be resolved. As a result, lead-acid and nickel-cadmium storage batteries have declined in importance, whilst nickel-metal hydride and lithium secondary batteries with superior performance have shown greater acceptability in newer applications. These developments are reflected in this article.
Abstract:
We consider a system comprising a finite number of nodes, with infinite packet buffers, that use unslotted ALOHA with Code Division Multiple Access (CDMA) to share a channel for transmitting packetised data. We propose a simple model for packet transmission and retransmission at each node, and show that saturation throughput in this model yields a sufficient condition for the stability of the packet buffers; we interpret this as the capacity of the access method. We calculate and compare the capacities of CDMA-ALOHA (with and without code sharing) and TDMA-ALOHA; we also consider carrier sensing and collision detection versions of these protocols. In each case, saturation throughput can be obtained via analysis of a continuous time Markov chain. Our results show how saturation throughput degrades with code sharing. Finally, we also present some simulation results for mean packet delay. Our work is motivated by optical CDMA, in which "chips" can be optically generated, and hence the achievable chip rate can exceed the achievable TDMA bit rate, which is limited by electronics. Code sharing may be useful in the optical CDMA context as it reduces the number of optical correlators at the receivers. Our throughput results help to quantify by how much the CDMA chip rate should exceed the TDMA bit rate so that CDMA-ALOHA yields better capacity than TDMA-ALOHA.
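Each saturation-throughput computation of the kind described reduces to finding the stationary distribution π of a continuous-time Markov chain with generator Q, i.e. solving πQ = 0 with Σπ = 1. A generic minimal solver (not the paper's specific chain) replaces one redundant balance equation with the normalisation constraint:

```python
def stationary(Q):
    # Q: square generator matrix (rows sum to 0) as a list of lists.
    # Solve pi @ Q = 0 subject to sum(pi) == 1 by transposing the balance
    # equations and replacing the last one with the normalisation row.
    n = len(Q)
    A = [[Q[j][i] for j in range(n)] for i in range(n)]  # columns of Q
    A[-1] = [1.0] * n                                     # sum(pi) = 1
    b = [0.0] * (n - 1) + [1.0]
    # Gaussian elimination with partial pivoting.
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))
        x[r] = s / A[r][r]
    return x
```

With π in hand, saturation throughput follows by weighting the success rate in each state by its stationary probability; the weighting itself depends on the particular protocol model.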
Abstract:
The steady-state throughput performance of distributed applications deployed in switched networks in the presence of end-system bottlenecks is studied in this paper. The effect of various limitations at an end-system is modelled as an equivalent transmission capacity limitation. A class of distributed applications is characterised by a static traffic distribution matrix that determines the communication between the various components of the application. It is found that the uniqueness of steady-state throughputs depends only on the traffic distribution matrix, and that some applications (e.g., broadcast applications) can yield non-unique values for the steady-state component throughputs. For a given switch capacity, with traffic distributions that yield fair, unique throughputs, the trade-off between the end-system capacity and the number of application components is brought out. With a proposed distributed rate control, it is illustrated that unique solutions can be obtained for certain traffic distributions for which this is otherwise impossible. Also, by proper selection of the rate control parameters, various throughput performance objectives can be realised.
Abstract:
In Universal Mobile Telecommunication Systems (UMTS), the Downlink Shared Channel (DSCH) can be used for providing streaming services. The traffic model for streaming services is different from the commonly used continuously backlogged model. Each connection specifies a required service rate over an interval of time, k, called the "control horizon". In this paper, our objective is to determine how k DSCH frames should be shared among a set of I connections. We need a scheduler that is efficient and fair, and we introduce the notion of discrepancy to balance the conflicting requirements of aggregate throughput and fairness. Our aim is to schedule the mobiles in such a way that the schedule minimizes the discrepancy over the k frames. We propose an optimal and computationally efficient algorithm, called STEM+. The proof of the optimality of STEM+ when applied to the UMTS rate sets is the major contribution of this paper. We also show that STEM+ performs better in terms of both fairness and aggregate throughput than other scheduling algorithms. Thus, STEM+ achieves both fairness and efficiency and is therefore an appealing algorithm for scheduling streaming connections.
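STEM+ itself is defined in the paper; as a simpler illustration of discrepancy-style scheduling, a deficit rule that serves, in each frame, the connection furthest behind its required cumulative service keeps every connection close to its target rate over the horizon. The rates and horizon below are invented, and this sketch is not the paper's algorithm:

```python
def deficit_schedule(rates, k):
    # rates: required fraction of frames for each connection (sum <= 1)
    # k: control horizon in frames
    # Each frame, serve the connection with the largest deficit, i.e. the
    # gap between its required cumulative service and what it has received.
    n = len(rates)
    served = [0] * n
    schedule = []
    for t in range(1, k + 1):
        deficits = [rates[i] * t - served[i] for i in range(n)]
        j = max(range(n), key=lambda i: deficits[i])
        served[j] += 1
        schedule.append(j)
    return schedule, served
```

For rates (1/2, 1/4, 1/4) over an 8-frame horizon this yields exactly 4, 2 and 2 frames per connection; the discrepancy notion in the paper formalises how far any such schedule may drift from the ideal allocation at intermediate frames.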
Abstract:
Telecommunication, broadcasting and other instrumented towers carry power and/or signal cables from their base to their upper regions. During a direct hit to the tower, significant induction can occur in these mounted cables. In order to provide adequate protection to the equipment connected to them, protection schemes have evolved in the literature. Developing more effective protection schemes requires quantitative knowledge of various parameters; however, such quantitative knowledge is difficult to find at present. Among these aspects, the present work investigates two important ones: (i) the nature of the induced currents, and (ii) the current sharing when, as per practice, the sheath of the cable is connected to the down conductor/tower. These results will be useful in the design of protection schemes and also in analyzing the field structure around instrumented towers.
Abstract:
We have demonstrated a novel concept of utilizing photomechanical actuation in carbon nanotubes (CNTs) to tune and reversibly switch the Bragg wavelength. When a fiber Bragg grating coated with CNTs (CNT-FBG) is exposed externally to a wide range of optical wavelengths, e.g., ultraviolet to infrared (0.2–200 μm), a strain is induced in the CNTs which alters the grating pitch and refractive index in the CNT-FBG system, resulting in a shift in the Bragg wavelength. This novel approach will find applications in telecommunication, sensors and actuators, and also in real-time monitoring of photomechanical actuation in nanoscale materials. (C) 2013 AIP Publishing LLC.
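The underlying relation is the textbook FBG response: the Bragg wavelength λ_B = 2·n_eff·Λ shifts with strain ε applied to the grating pitch, and to first order the fractional shift is Δλ/λ_B = (1 − p_e)·ε, where p_e ≈ 0.22 is the effective photo-elastic coefficient of silica fibre. The sketch below uses these standard values, not figures from this paper:

```python
def bragg_shift(lambda_b_nm, strain, p_e=0.22):
    # First-order fibre Bragg grating strain response:
    #   d(lambda) = lambda_B * (1 - p_e) * strain
    # lambda_b_nm: unstrained Bragg wavelength in nm
    # strain: dimensionless axial strain (e.g. 1e-6 = 1 microstrain)
    # p_e: effective photo-elastic coefficient (~0.22 for silica)
    return lambda_b_nm * (1 - p_e) * strain
```

At λ_B = 1550 nm this gives roughly 1.2 pm of shift per microstrain, which is why even the small photomechanical strains produced by illuminated CNT coatings are resolvable as Bragg-wavelength shifts.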