964 results for governance networks
Abstract:
A membrane with interpenetrating networks of poly(vinyl alcohol) (PVA) and poly(styrene sulfonic acid) (PSSA), coupled with high proton conductivity, is realized and evaluated as a proton exchange membrane electrolyte for a direct methanol fuel cell (DMFC). Its reduced methanol permeability and improved performance in DMFCs suggest the new blend as an alternative to Nafion membranes. The membrane has been characterized by powder X-ray diffraction, scanning electron microscopy, time-modulated differential scanning calorimetry, and thermogravimetric analysis, in conjunction with measurements of its mechanical strength. The maximum proton conductivity of 3.3 × 10⁻² S/cm for the PVA–PSSA blend membrane is observed at 373 K. From nuclear magnetic resonance imaging and volume-localized spectroscopy experiments, the PVA–PSSA membrane has been found to exhibit promising methanol impermeability in DMFCs. On evaluating its utility in a DMFC, a peak power density of 90 mW/cm² at a load current density of 320 mA/cm² is achieved with the PVA–PSSA membrane, compared to a peak power density of 75 mW/cm² at a load current density of 250 mA/cm² for a DMFC employing a Nafion membrane electrolyte operating under identical conditions; this is attributed primarily to the methanol-crossover-mitigating property of the PVA–PSSA membrane.
Abstract:
Single-symbol maximum-likelihood (ML) decodable distributed orthogonal space-time block codes (DOSTBCs) have been introduced recently for cooperative networks, and an upper bound on the maximal rate of such codes, along with code constructions, has been presented. In this paper, we introduce a new class of distributed space-time block codes (DSTBCs) called semi-orthogonal precoded distributed single-symbol decodable space-time block codes (Semi-SSD-PDSTBCs), wherein the source performs precoding on the information symbols before transmitting them to all the relays. A set of necessary and sufficient conditions on the relay matrices for the existence of Semi-SSD-PDSTBCs is proved. It is shown that the DOSTBCs are a special case of Semi-SSD-PDSTBCs. A subset of Semi-SSD-PDSTBCs having a diagonal covariance matrix at the destination is studied, and an upper bound on the maximal rate of such codes is derived. The bounds obtained are approximately twice those of the DOSTBCs. A systematic construction of Semi-SSD-PDSTBCs is presented for the number of relays K ≥ 4, and the constructed codes are shown to have higher rates than the DOSTBCs.
Abstract:
Communication and Political Crisis explores the role of the global media in a period of intensifying geopolitical conflict. Through case studies drawn from domestic and international political crises such as the conflicts in the Middle East and Ukraine, leading media scholar Brian McNair argues that the digitized, globalized public sphere now confronted by all political actors has produced new opportunities for social progress and democratic reform, as well as new channels for state propaganda and terrorist spectaculars such as those performed by the Islamic State and Al Qaeda. In this major work, McNair argues that the role of digital communication will be crucial in determining the outcome of pressing global issues such as the future of feminism and gay rights, freedom of speech and media, and democracy itself.
Abstract:
Three-dimensional achiral coordination polymers of the general formula M₂(D,L-NHCH(COO)CH₂COO)₂·C₄H₄N₂, where M = Ni or Co and pyrazine acts as the linker molecule, have been prepared under hydrothermal conditions starting with [M(L-NHCH(COO)CH₂COO)·3H₂O], which possesses a helical chain structure. A three-dimensional hybrid compound of the formula Pb₂.₅[N{CH(COO)CH₂COO}₂]·2H₂O has also been prepared hydrothermally starting with aspartic acid and Pb(NO₃)₂. In this lead compound, where a secondary amine formed by the dimerisation of aspartic acid acts as the ligand, there is two-dimensional inorganic connectivity and one-dimensional organic connectivity.
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussion is couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞. In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence, our results can be viewed as providing bounds for the performance with practical distributed schedulers.
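As a concrete illustration of the tree algorithm analysed in this abstract, the following is a minimal Python sketch (standard library only) of one-shot max computation by convergecast on a BFS tree over a random geometric graph. The deployment model, the radius constant, and all parameter values are assumptions made for illustration, not the paper's exact setup.

```python
# A minimal sketch of max computation by convergecast on a BFS tree over a
# random geometric graph; the radius constant 2.0 is an assumption.
import math
import random
from collections import defaultdict, deque

def random_geometric_graph(n, seed=1):
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    r = 2.0 * math.sqrt(math.log(n) / n)  # radius above the connectivity threshold
    adj = defaultdict(list)
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def tree_max(adj, values, root=0):
    # BFS tree rooted at the operator station, then a leaves-to-root
    # convergecast: every node forwards the max of its own reading and
    # everything reported by its children.
    parent = {root: None}
    order = [root]
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                order.append(v)
                q.append(v)
    best = dict(values)  # best[u] = max seen in u's subtree so far
    for u in reversed(order):  # process leaves first
        if parent[u] is not None:
            best[parent[u]] = max(best[parent[u]], best[u])
    return best[root]

n = 200
adj = random_geometric_graph(n)
values = {i: random.Random(i).random() for i in range(n)}
print(tree_max(adj, values) == max(values.values()))  # True when the graph is connected
```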
Abstract:
Formation of high-value procurement networks involves a bottom-up assembly of complex production, assembly, and exchange relationships through supplier selection and contracting decisions, where suppliers are intelligent and rational agents who act strategically. In this paper we address the problem of forming procurement networks for items with value-adding stages that are linearly arranged. We model the problem of Procurement Network Formation (PNF) for multiple units of a single item as a cooperative game where agents cooperate to form a surplus-maximizing procurement network and then share the surplus in a stable and fair manner. We first investigate the stability of such networks by examining the conditions under which the core of the game is non-empty. We then present a protocol, based on the extensive form game realization of the core, for forming such networks so that the resulting network is stable. We also mention a key result when the Shapley value is applied as a solution concept.
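To make the surplus-sharing idea concrete, here is a small Python sketch of the Shapley value for a toy three-agent game; the characteristic function below is invented for illustration and is not the paper's procurement model.

```python
# Generic Shapley-value computation for a small cooperative game, included
# only to illustrate the surplus-sharing solution concept the abstract
# mentions; the toy characteristic function is an invented example.
from itertools import permutations

def shapley(players, v):
    # phi_i = average over all orderings of i's marginal contribution
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += v(with_p) - v(coalition)
            coalition = with_p
    return {p: phi[p] / len(perms) for p in phi}

# Toy surplus function: surplus is generated only when a supplier is
# matched with the buyer (a two-tier linear value chain in miniature).
def v(coalition):
    surplus = 0.0
    if "buyer" in coalition:
        if "supplier_A" in coalition:
            surplus += 10.0
        if "supplier_B" in coalition:
            surplus += 6.0
    return surplus

print(shapley(["buyer", "supplier_A", "supplier_B"], v))
# buyer 8.0, supplier_A 5.0, supplier_B 3.0
```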
Abstract:
The Body Area Network (BAN) is an emerging technology that focuses on monitoring physiological data in, on and around the human body. BAN technology permits wearable and implanted sensors to collect vital data about the human body and transmit it to other nodes via low-energy communication. In this paper, we investigate interactions in terms of data flows between parties involved in BANs under four different scenarios targeting outdoor and indoor medical environments: hospital, home, emergency and open areas. Based on these scenarios, we identify data flow requirements between BAN elements such as sensors and control units (CUs) and parties involved in BANs such as the patient, doctors, nurses and relatives. Identified requirements are used to generate BAN data flow models. Petri Nets (PNs) are used as the formal modelling language. We check the validity of the models and compare them with the existing related work. Finally, using the models, we identify communication and security requirements based on the most common active and passive attack scenarios.
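As a hedged illustration of the Petri-net modelling the abstract describes, the following minimal Python "token game" fires a two-transition BAN data flow (sensor → control unit → doctor). The place and transition names are invented; the paper's PN models are substantially richer.

```python
# A minimal Petri-net token game for one BAN data flow
# (sensor -> control unit -> doctor); names are illustrative only.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)  # place -> token count
        self.transitions = {}         # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet({"sensor_reading": 1})
net.add_transition("send_to_CU", ["sensor_reading"], ["cu_buffer"])
net.add_transition("forward_to_doctor", ["cu_buffer"], ["doctor_inbox"])
net.fire("send_to_CU")
net.fire("forward_to_doctor")
print(net.marking)  # {'sensor_reading': 0, 'cu_buffer': 0, 'doctor_inbox': 1}
```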
Abstract:
With the level of digital disruption affecting businesses around the globe, you might expect high levels of Governance of Enterprise Information and Technology (GEIT) capability within boards. Boards and their senior executives know technology is important. More than 90% of boards and senior executives currently identify technology as essential to their current businesses and to their organization’s future. But as few as 16% have sufficient GEIT capability. The Global Centre for Digital Business Transformation’s recent research contains strong indicators of the need for change. Despite board awareness of both the likelihood and impact of digital disruption, things digital are still not viewed as a board-level matter in 45% of companies. And it’s not just the board: the lack of board attention to technology can be mirrored at senior executive level as well. When asked about their organization’s attitude towards digital disruption, 43% of executives said their business either did not recognise it as a priority or was not responding appropriately. A further 32% were taking a “follower” approach, a potentially risky move as we will explain. Given all the evidence that boards know information and technology (I&T) is vital, and that they understand the inevitability, impact and speed of digital change and disruption, why are so many boards dragging their heels? Ignoring I&T disruption and refusing to build capability at board level is nothing short of negligence. Too many boards risk flying blind without GEIT capability [2]. To help build decision quality and I&T governance capability, this research:
• Confirms a pressing need to build individual competency and cumulative, across-board capability in governing I&T
• Identifies six factors that have rapidly increased the need, risk and urgency
• Finds that boards may risk not meeting their duty-of-care responsibilities when it comes to I&T oversight
• Highlights barriers to building capability
• Details three GEIT competencies that boards and executives can use for evaluation, selection, recruitment and professional development.
Abstract:
Information and technology, and its use in organisation transformation, presents unprecedented opportunities and risks. Governance of Enterprise Information and Technology (GEIT) competency is increasingly needed in the boardroom and the executive. Whether your organization is small or large; public, private or not-for-profit; or whether your industry is not considered high-tech, IT is impacting your sector – no exceptions. But there is a skill shortage in boards: GEIT capability is concerningly low. This capability is urgently needed across the board, including among those directors who come from finance, legal, marketing, operations and HR backgrounds. Digital disruption also affects all occupations. Putting a vision in place will help ensure that emergency responses meet technology-related duty-of-care responsibilities. When GEIT-related forward thinking and planning is carried out at the same time that you put your business strategy and plan in place, your organization has a significantly increased chance of not only surviving, but thriving into the future. Those organizations that don’t build GEIT capability risk joining the growing list of once-leading firms left behind in the digital ‘cloud of smoke’. Those organizations that do will be better placed to reap the benefits and hedge against the risks of a digital world. This chapter provides actionable, research-based considerations and processes for boards to use to build awareness, knowledge and skills in governing technology-related organization strategy, risk and value creation.
Abstract:
The object of the dissertation is to analyse the concept of social responsibility in relation to research and development of new biotechnology. This is done by examining the relevant actors – researchers, administrators, decision-makers, experts, industry, and the public – involved in the Finnish governance of biotechnology, through their roles and responsibilities. Existing practices of responsibility in biotechnology governance, as well as the discourses of responsibility – the actors’ conceptions of their own and others’ responsibilities – are analysed. Three types of responsibility that the actors have assumed are formulated, and the implications of these conceptions for the governance of new biotechnology are analysed. From these different types of responsibility adopted and used by the actors, theoretical models called responsibility chains are constructed. The notion of responsibility is under-theorised in sociology, and this research is an attempt to create a mid-range theory of responsibility in the context of biotechnology governance. The research aims to increase understanding of the governance system from a holistic viewpoint by contributing to academic debates on science and technology policy, public understanding of science, commercialisation of research, and corporate social responsibility. With a thorough analysis of the concept of responsibility derived from empirical data, the research brings new perspectives into these debates by challenging many normative ideas embedded in discourses. For example, the multiple roles of the public are analysed to highlight the problems of consumerism and citizen participation in practice, as well as in relation to different policy strategies. The research also examines the contradictory responsibilities faced by biotechnology researchers, who balance between academic autonomy, commercialisation of research, and reflecting on the social consequences of their work. Industry’s responsibilities are also examined from the viewpoint of biotechnology. The research methodology addresses the contradictions between empirical findings, theories of biotechnology governance, and policies in a novel way, as the study concentrates on several actors and investigates both the discourses and the practices of the actors. Thus, the qualitative method of analysis is a combination of discourse and content analysis. The empirical material comprises 29 personal interviews as well as documents by Finnish and multinational organizations on biotechnology governance.
Abstract:
Considerable empirical research substantiates the importance of social networks on health and well-being in later life. A study of ethnic minority elders living in two low income public housing buildings in East Harlem was undertaken to gain an understanding of the relationship between their health status and social networks. Findings demonstrate that elders with supportive housing had better psychological outcomes and used significantly more informal supports when in need. However, elders with serious health problems had poorer outcomes regardless of their level of social support. This study highlights the potential of supportive living environments to foster social integration and to optimise formal and informal networks.
Abstract:
Since the 1990s, European policy strategies have stressed the mutual responsibility and joint action of all societal branches in preventing social problems. Network policy is an integral part of the new governance that generates a new kind of dependency between the state and civil society in formulating and adhering to policy goals. Using empirical group interview data collected in Helsinki, the capital of Finland, this case study explores local multi-agency groups and their efforts to prevent the exclusion of children and young people. These groups consist mainly of professionals from the social office, youth clubs and schools. The study shows that these multi-agency groups serve as forums for professional negotiation where the intervention dilemma of liberal society can be addressed: the question of when it is justified and necessary for an authority or network to intervene in the life of children and their families, and how this is to be done. An element of tension in multi-agency prevention is introduced by the fact that its objectives and means are anchored both in the old tradition of the welfare state and in communitarian rhetoric. Thus multi-agency groups mend deficiencies in wellbeing and normalcy while at the same time trying to co-ordinate the creation of the new community, which will hopefully reduce the burden on the public sector. Some of the professionals interviewed were keen to see new and even forceful interventions to guide the youth or to compel parents to assume their responsibilities. In group discussions, this approach often met resistance. The deeper the social problems that the professionals worked with, the more solidarity they showed for the families or the young people in need. Nothing seems to assure professionals and to legitimise their professional position better than advocating for the under-privileged against the uncertainties of life and the structural inequalities of society. The groups that grappled with the clear, specific needs of certain children and families were the most capable of co-operation. This requires the approval of different powers and the expertise of distinct professions, as well as a forum to negotiate case-specific actions in professional confidentiality. The ideals of primary prevention for everyone, and value discussions alone, fail to inspire sufficient multi-agency co-operation. The ideal of a network seems to give word and shape to those societal goals that are difficult or even impossible to reach, but are nevertheless yearned for: mutual understanding of the good life, close social relationships, mutual trust and active agency for all citizens. Individualisation, the multiplicity of lifestyles and the possibility to choose have come true in such a way that the very idea of a mutual and binding network can be attained only momentarily and between restricted participants. In conclusion, uniting professional networks that negotiate intervention dilemmas with citizen networks based on changing compassions and feelings of moral superiority seems impossible. Rather, one should encourage openness to scrutiny among tangential or contradicting groups, networks and communities. Key words: network policy, prevention of exclusion, multi-agency groups, young people
Abstract:
In wireless ad hoc networks, nodes communicate with far-off destinations using intermediate nodes as relays. Since wireless nodes are energy-constrained, it may not be in the best interest of a node to always accept relay requests. On the other hand, if all nodes decide not to expend energy in relaying, then network throughput will drop dramatically. Both these extreme scenarios (complete cooperation and complete noncooperation) are inimical to the interests of a user. In this paper, we address the issue of user cooperation in ad hoc networks. We assume that nodes are rational, i.e., their actions are strictly determined by self-interest, and that each node is associated with a minimum lifetime constraint. Given these lifetime constraints and the assumption of rational behavior, we are able to determine the optimal share of service that each node should receive. We define this to be the rational Pareto optimal operating point. We then propose a distributed and scalable acceptance algorithm called Generous TIT-FOR-TAT (GTFT). The acceptance algorithm is used by the nodes to decide whether to accept or reject a relay request. We show that GTFT results in a Nash equilibrium and prove that the system converges to the rational Pareto optimal operating point.
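The following Python sketch paraphrases the flavour of a generous tit-for-tat acceptance rule: keep relaying while the help given does not exceed the help received by more than a generosity margin ε. The exact rule, state variables and parameter values here are assumptions for illustration; the paper's GTFT is defined on normalized acceptance ratios and comes with convergence proofs.

```python
# A schematic relay-acceptance rule in the spirit of Generous TIT-FOR-TAT;
# this paraphrases the idea and is not the paper's exact algorithm.
from dataclasses import dataclass

@dataclass
class RelayNode:
    relayed_for_others: int = 0  # requests this node accepted
    relayed_by_others: int = 0   # this node's requests others accepted
    epsilon: float = 0.1         # generosity margin (assumed value)

    def accept(self) -> bool:
        given = self.relayed_for_others
        received = max(self.relayed_by_others, 1)  # avoid divide-by-zero early on
        return given / received <= 1.0 + self.epsilon

node = RelayNode()
for _ in range(5):
    if node.accept():
        node.relayed_for_others += 1  # serve the relay request
print(node.relayed_for_others)  # generosity lets a few requests through before any reciprocation
```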
Abstract:
The problem of scheduling divisible loads in distributed computing systems in the presence of processor release times is considered. The objective is to find the optimal sequence of load distribution and the optimal load fractions assigned to each processor in the system such that the processing time of the entire load is minimized. This is a difficult combinatorial optimization problem, and hence a genetic-algorithm approach is presented for its solution.
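As a sketch of the search approach, the following stripped-down evolutionary loop (mutation-only, standing in for a full genetic algorithm) searches jointly over the distribution sequence and the load fractions under a simplified single-bus cost model with release times; the cost model and all parameter values are illustrative assumptions.

```python
# A stripped-down evolutionary search for divisible-load scheduling with
# release times; the single-bus linear cost model and all constants are
# assumptions for illustration.
import random

SPEED = [1.0, 2.0, 1.5, 3.0]    # time to compute one unit of load
RELEASE = [0.0, 0.5, 1.0, 0.2]  # processor release times
COMM = 0.2                      # time to transmit one unit of load

def makespan(order, fracs):
    t_sent, finish = 0.0, 0.0
    for p, a in zip(order, fracs):
        t_sent += COMM * a                          # bus busy until the fraction arrives
        start = max(t_sent, RELEASE[p])             # cannot start before release time
        finish = max(finish, start + SPEED[p] * a)  # linear computation time
    return finish

def random_individual(n):
    order = random.sample(range(n), n)
    raw = [random.random() for _ in range(n)]
    s = sum(raw)
    return order, [x / s for x in raw]  # load fractions sum to 1

def mutate(ind):
    order, fracs = list(ind[0]), list(ind[1])
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]  # swap two positions in the sequence
    fracs[i] = max(1e-6, fracs[i] + random.gauss(0, 0.05))
    s = sum(fracs)
    return order, [x / s for x in fracs]

random.seed(0)
pop = [random_individual(4) for _ in range(30)]
for _ in range(200):
    pop.sort(key=lambda ind: makespan(*ind))
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
best = min(pop, key=lambda ind: makespan(*ind))
print(round(makespan(*best), 3), best[0])
```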
Abstract:
An ad hoc network is composed of mobile nodes without any infrastructure. Recent trends in applications of mobile ad hoc networks rely increasingly on group-oriented services; hence multicast support is critical for ad hoc networks. We also need to provide service differentiation schemes for different groups of users. An efficient application-layer multicast (APPMULTICAST) solution suitable for low-mobility applications in the MANET environment has been proposed in [10]. In this paper, we present an improved application-layer multicast solution suitable for medium-mobility applications in the MANET environment. We define multicast groups with low priority and high priority and incorporate a two-level service differentiation scheme. We use network-layer support to build the overlay topology closer to the actual network topology, and we aim to maximize the Packet Delivery Ratio. Through simulations we show that the control overhead of our algorithm is within acceptable limits and that it achieves an acceptable Packet Delivery Ratio for medium-mobility applications.
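The following Python sketch illustrates, under invented data and a greedy heuristic, the two ideas this abstract combines: building the overlay from network-layer hop counts so overlay edges track the underlying topology, and giving high-priority groups strict precedence. It is not the paper's protocol.

```python
# Illustrative only: a Prim-style overlay tree built over network-layer hop
# counts, plus a two-level priority queue for service differentiation.
import heapq

def build_overlay(members, hops):
    # Greedily attach each member to the already-connected node that is
    # the fewest network-layer hops away.
    tree, connected = [], {members[0]}
    while len(connected) < len(members):
        _, u, v = min((hops[u][v], u, v)
                      for u in connected for v in members if v not in connected)
        tree.append((u, v))
        connected.add(v)
    return tree

def drain(queue):
    # Two-level service differentiation: all high-priority packets (level 0)
    # are served before any low-priority packet (level 1).
    while queue:
        priority, seq, packet = heapq.heappop(queue)
        print("send", packet)

hops = {"s": {"a": 1, "b": 3}, "a": {"s": 1, "b": 1}, "b": {"s": 3, "a": 1}}
print(build_overlay(["s", "a", "b"], hops))  # [('s', 'a'), ('a', 'b')]

q = []
heapq.heappush(q, (1, 0, "low-priority group packet"))
heapq.heappush(q, (0, 1, "high-priority group packet"))
drain(q)  # the high-priority packet is sent first
```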