983 results for Corporate networks
Abstract:
In earlier work, nonisomorphic graphs have been converted into networks to realize multistage interconnection networks that are topologically nonequivalent to the Baseline network. The drawback of this technique is that these nonequivalent networks are not guaranteed to be self-routing, because each node in the graph model can be replaced by a (2 × 2) switch in any one of four different configurations. Hence, the problem of routing in these networks remains unsolved. Moreover, the nonisomorphic graphs were obtained by interconnecting bipartite loops in a heuristic manner, which makes it difficult to guarantee full connectivity in large networks. We solve these problems through a direct approach in which a matrix model for self-routing networks is developed. An example shows that this model encompasses nonequivalent self-routing networks. The approach has the additional advantage that the matrix model itself ensures full connectivity.
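The self-routing property at stake here can be illustrated with the classical destination-tag scheme used in Baseline-type networks. The sketch below is purely illustrative and is not the matrix model of the abstract; the function name and data layout are assumptions.

```python
def routing_tags(dest, n_stages):
    """Destination-tag self-routing in a multistage network of (2 x 2)
    switches: the switch at stage i inspects bit i of the destination
    address (most significant bit first) and forwards the packet on its
    upper output for a 0 bit and on its lower output for a 1 bit."""
    return [(dest >> (n_stages - 1 - i)) & 1 for i in range(n_stages)]

# Route to destination 5 (binary 101) through a 3-stage (8-port) network:
# the three switches on the path are set to lower, upper, lower.
tags = routing_tags(5, 3)
```

The point of the abstract is precisely that an arbitrary replacement of graph nodes by switches can break this bit-controlled behaviour, which the matrix model is designed to preserve.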
Abstract:
Previous research has been inconclusive regarding the impact of those who invest in entrepreneurs. Consider for a moment how potentially important they are to entrepreneurs: they decide, for example, who deserves funding, how much time to contribute to their portfolio firms, and whether to grant entrepreneurs access to their networks, and they help entrepreneurs acquire additional funding. In sum, investors potentially have a great impact on the success of entrepreneurs. It is therefore important that we better understand the environment, relationships and context in which the parties operate. This thesis contains five articles that explore investors’ and entrepreneurs’ relationships from various viewpoints and theoretical frameworks, using a variety of data and research methods. The first article is a literature review that summarises what we know of venture capital, business angel and corporate venture capital funding. The second article studies the entrepreneurs’ investor selection process and its consequences, and identifies key factors that influence the process. Earlier research has commonly concentrated on the investors’ selection policy, not the entrepreneurs’. The data and conclusions are based on multiple case studies. The article analyses how entrepreneurs can ensure that they get the best possible investor, when it is possible for an entrepreneur to select an investor, and what the consequences of investor selection are. The third article employs power constructs (dependency, power balance/imbalance, power sources) and analyses their applicability to the investor-entrepreneur relationship. Power constructs are extensively studied and utilised in the management and organisation literature, but power aspects are rarely analysed in investor-entrepreneur relationships. Yet having the ability to “get others to do things they would not otherwise do” is a very common factor in the investor-entrepreneur relationship.
Therefore, employing power constructs and analysing their applicability in this setting is well founded. The article is based on a single case study but suggests that power constructs could be applicable and could consequently provide additional insights into the investor-entrepreneur relationship. The fourth article studies the role of advisors in the venture capital investment process and analyses the implications for research and practice, particularly from the entrepreneurs’ perspective. The entrepreneurial finance literature commonly describes the entrepreneur-investor relationship as linear and bilateral; however, it was discovered that advisors may influence the relationship. The article analyses the role of advisors, their operating procedures, and their impact on the different parties. The fifth article concentrates on the investors’ certification effect. It measures and demonstrates that venture capital investment is likely to increase the credibility (in terms of media attention) of early-stage firms, the firms that most often need additional credibility. Understanding investor certification can affect how entrepreneurs evaluate investment offers and how investors can make their offers appear more lucrative.
Abstract:
This dissertation is a broad study of the factors affecting perceptions of CSR issues in multiple stakeholder realms, its main purpose being to determine the effects of individuals’ values on their perceptions of CSR. It examines perceptions of CSR at both the emic level (observing individuals and stakeholders) and the etic level (conducting cross-cultural comparison) through a descriptive-empirical research strategy. The dissertation is based on quantitative interview data from Chinese, Finnish and US stakeholder groups of industrial companies (with an emphasis on the forest industries) and consists of four published articles and two submitted manuscripts. Theoretically, this dissertation provides a valuable and unique philosophical and intellectual perspective on the contemporary study of CSR, 'The Harmony Approach to CSR'. Empirically, it assesses values and evaluates CSR across a wide variety of business activities, covering CSR reporting, business ethics, and three dimensions of CSR performance. From the multi-stakeholder perspective, the dissertation uses survey methods to examine perceptions and stakeholder salience in the context of CSR, describing and comparing the differences between demographic factors as well as the hypothetical drivers behind perceptions. The results of the study suggest that the CSR objective of a corporation's top management should be to manage the divergent and conflicting interests of multiple stakeholders, taking stakeholders other than the key ones into account as well. The importance of values as a driver of ethical behaviour and decision-making has been generally recognized. This dissertation provides more empirical support for this theory by highlighting the effects of values on CSR perceptions. It suggests that since the way to encourage responsible behaviour and develop CSR is to develop individuals’ values and cultivate their virtues, it is time to invoke the critical role of moral (ethics) education.
The specific studies of China and the comparison between Finland and the US contribute to a common understanding of emerging CSR issues, problems and opportunities for the future of sustainability. The similarities among these countries can enhance international cooperation, while the differences open up opportunities and diversified solutions for CSR under local conditions.
Abstract:
The world of mapping has changed. Earlier, only professional experts were responsible for map production, but today ordinary people without any training or experience can become map-makers. The number of online mapping sites and the number of volunteer mappers have increased significantly. Technological developments, such as satellite navigation systems, Web 2.0, broadband Internet connections, and smartphones, have played a key role in enabling the rise of volunteered geographic information (VGI). As opening governmental data to the public is a topical issue in many countries, the opening of high-quality geographical data has a central role in this study. The aim of this study is to investigate the quality of spatial data produced by volunteers by comparing it with map data produced by public authorities, to follow what occurs when spatial data are opened to users, and to become acquainted with the user profile of these volunteer mappers. A central part of this study is the OpenStreetMap (OSM) project, whose aim is to create a map of the entire world through volunteer effort. Anyone can become an OpenStreetMap contributor, and the data created by the volunteers are free for anyone to use, without restrictive copyrights or licence charges. In this study OpenStreetMap is investigated from two viewpoints. In the first part of the study, the aim was to investigate the quality of volunteered geographic information. A pilot project was implemented by following what occurred when high-resolution aerial imagery was released freely to the OpenStreetMap contributors. The quality of VGI was investigated by comparing the OSM datasets with the map data of the National Land Survey of Finland (NLS). The quality of OpenStreetMap data was investigated by inspecting the positional accuracy and completeness of the road datasets, as well as the differences in the attribute datasets between the studied datasets.
The OSM community was also analysed, and the development of the OpenStreetMap map data was investigated by visual analysis. The aim of the second part of the study was to analyse the user profile of OpenStreetMap contributors and to investigate how the contributors act when collecting data and editing OpenStreetMap. A further aim was to investigate what motivates users to map and how they perceive the quality of volunteered geographic information. The second part of the study was implemented by conducting a web survey of the OpenStreetMap contributors. The results of the study show that the quality of OpenStreetMap data, compared with the data of the National Land Survey of Finland, can be characterised as good. OpenStreetMap differs from the map of the National Land Survey especially in its degree of uncertainty, for example because the completeness and uniformity of the map are not known. The results reveal that opening spatial data notably increased the amount of data in the study area, and both the positional accuracy and the completeness improved significantly. The study confirms the earlier finding that only a few contributors have created the majority of the data in OpenStreetMap. The survey of OpenStreetMap users revealed that the data are most often collected on foot or by bicycle using a GPS device, or by editing the map with the help of aerial imagery. According to the responses, the users take part in the OpenStreetMap project because they want to make maps better and to produce maps that contain up-to-date information not found on any other map. Almost all of the users make use of the maps themselves, most often by downloading the map into a navigator or a mobile device. The users regard the quality of OpenStreetMap as good, especially because the map is up to date and accurate.
Abstract:
We propose a method to compute a probably approximately correct (PAC) normalized histogram of observations with a refresh rate of Θ(1) time units per histogram sample on a random geometric graph with noise-free links. The delay in computation is Θ(√n) time units. We further extend our approach to a network with noisy links. While the refresh rate remains Θ(1) time units per sample, the delay increases to Θ(√n log n). The number of transmissions in both cases is Θ(n) per histogram sample. The achieved Θ(1) refresh rate for PAC histogram computation is a significant improvement over the refresh rate of Θ(1/log n) for histogram computation in noiseless networks. We achieve this by operating in the supercritical thermodynamic regime, where large pathways for communication build up but the network may have more than one component. The largest component, however, will have an arbitrarily large fraction of nodes in order to enable approximate computation of the histogram to the desired level of accuracy. Operation in the supercritical thermodynamic regime also reduces energy consumption. A key step in the proof of our achievability result is the construction of a connected component having bounded degree and any desired fraction of nodes. This construction may also prove useful in other communication settings on the random geometric graph.
Abstract:
We present a distributed algorithm that finds a maximal edge packing in O(Δ + log* W) synchronous communication rounds in a weighted graph, independent of the number of nodes in the network; here Δ is the maximum degree of the graph and W is the maximum weight. As a direct application, we have a distributed 2-approximation algorithm for minimum-weight vertex cover with the same running time. We also show how to find an f-approximation of minimum-weight set cover in O(f²k² + fk log* W) rounds; here k is the maximum size of a subset in the set cover instance, f is the maximum frequency of an element, and W is the maximum weight of a subset. The algorithms are deterministic, and they can be applied in anonymous networks.
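The link between maximal edge packings and 2-approximate vertex covers can be seen in a simple sequential sketch; this is an illustration of the underlying LP-duality argument under assumed data layouts, not the distributed O(Δ + log* W) algorithm of the abstract.

```python
def maximal_edge_packing(weights, edges):
    """Greedily raise each edge's packing value y_e until one endpoint's
    remaining vertex weight is exhausted. Every edge then has at least one
    saturated endpoint, so the saturated vertices form a vertex cover of
    weight at most twice the optimum (weak LP duality)."""
    slack = dict(weights)              # remaining capacity at each vertex
    y = {}
    for u, v in edges:
        y[(u, v)] = inc = min(slack[u], slack[v])
        slack[u] -= inc
        slack[v] -= inc
    cover = {v for v in weights if slack[v] == 0}
    return y, cover
```

Because the packing is maximal, no edge can have both endpoints unsaturated, which is exactly why the saturated set covers every edge.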
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate.
This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs, that is, geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
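As a point of reference for task (iii), the classical centralised greedy heuristic for minimum dominating set can be sketched as below. This is the textbook logarithmic-factor approximation, not one of the local or geometric algorithms developed in the thesis; the adjacency-dictionary layout is an assumption.

```python
def greedy_dominating_set(adj):
    """adj maps each vertex to the set of its neighbours. Repeatedly pick
    the vertex whose closed neighbourhood covers the most vertices that
    are not yet dominated; this yields an O(log n)-approximation of the
    minimum dominating set."""
    undominated = set(adj)
    dominating = set()
    while undominated:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        dominating.add(v)
        undominated -= {v} | adj[v]
    return dominating
```

For the locating variants mentioned above (identifying codes, locating–dominating codes) this greedy step is no longer sufficient, since the chosen set must also distinguish vertices by their neighbourhoods.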
Abstract:
The use of energy harvesting (EH) nodes as cooperative relays is a promising and emerging solution in wireless systems such as wireless sensor networks. It harnesses the spatial diversity of a multi-relay network and addresses the vexing problem of a relay's batteries being drained by forwarding information to the destination. We consider a cooperative system in which EH nodes volunteer to serve as amplify-and-forward relays whenever they have sufficient energy for transmission. For a general class of stationary and ergodic EH processes, we introduce the notion of energy-constrained and energy-unconstrained relays and analytically characterize the symbol error rate of the system. Further insight is gained by an asymptotic analysis that considers the cases where the signal-to-noise ratio or the number of relays is large. Our analysis quantifies how the energy usage at an EH relay, and consequently its availability for relaying, depends not only on the relay's energy harvesting process, but also on its transmit power setting and on the other relays in the system. The optimal static transmit power setting at the EH relays is also determined. Altogether, our results demonstrate how a system that uses EH relays differs in significant ways from one that uses conventional cooperative relays.
Abstract:
In rapid parallel magnetic resonance imaging, the problem of image reconstruction is challenging. Here, a novel image reconstruction technique in a neural-network framework, applicable to data acquired along any general trajectory and called "Composite Reconstruction And Unaliasing using Neural Networks" (CRAUNN), is proposed. CRAUNN is based on the observation that the nature of aliasing remains unchanged whether the undersampled acquisition contains only low frequencies or includes high frequencies too. The transformation needed to reconstruct the alias-free image from the aliased coil images is learnt using acquisitions consisting of densely sampled low frequencies. Neural networks are used as machine-learning tools to learn this transformation and thereby obtain the desired alias-free image for actual acquisitions containing sparsely sampled low as well as high frequencies. CRAUNN operates in the image domain, does not require explicit coil sensitivity estimation, and is independent of the sampling trajectory used, so it can be applied to arbitrary trajectories. As a pilot trial, the technique is first applied to Cartesian-trajectory-sampled data. Experiments performed using radial and spiral trajectories on real and synthetic data illustrate the performance of the method. The reconstruction errors depend on the acceleration factor as well as the sampling trajectory; higher acceleration factors can be obtained when radial trajectories are used. Comparisons against existing techniques are presented, and CRAUNN is found to perform on par with the state-of-the-art techniques. Acceleration factors of up to 4, 6 and 4 are achieved in the Cartesian, radial and spiral cases, respectively. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
The three-dimensional structure of a protein is formed and maintained by the noncovalent interactions among the amino acid residues of the polypeptide chain. These interactions can be represented collectively in the form of a network. So far, such networks have been investigated by considering connections based on distances between the amino acid residues. Here we present a method of constructing the structure network based on interaction energies among the amino acid residues in the protein. We have investigated the properties of such protein energy-based networks (PENs) and have shown correlations to protein structural features, such as the clusters of residues involved in stability, and the formation of secondary and super-secondary structural units. Further, we demonstrate that the analysis of PENs in terms of parameters such as hubs and shortest paths can provide a variety of biologically important information, such as the residues crucial for stabilizing the folded units and the paths of communication between distal residues in the protein. Finally, the energy regimes for different levels of stabilization in the protein structure have clearly emerged from the PEN analysis.
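The network construction and shortest-path analysis described above can be sketched as follows. The dictionary layout, energy units, and cutoff value are illustrative assumptions, not taken from the paper.

```python
from collections import deque

def build_pen(energies, cutoff):
    """energies: dict mapping a residue pair (i, j) to its pairwise
    interaction energy. Connect two residues when their interaction is
    at least as favourable (i.e. as negative) as the cutoff."""
    adj = {}
    for (i, j), e in energies.items():
        if e <= cutoff:
            adj.setdefault(i, set()).add(j)
            adj.setdefault(j, set()).add(i)
    return adj

def shortest_path(adj, src, dst):
    """Breadth-first search: one candidate communication path between
    two (possibly distal) residues in the network."""
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None
```

Hubs fall out of the same structure: they are simply the vertices of highest degree in `adj`.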
Abstract:
Work/family reconciliation is a crucial question both for personal well-being and, at the societal level, for productivity and reproduction throughout the Western world. This thesis examines work/family reconciliation at the societal and organisational levels in the Finnish context. The study starts from an initial framework, develops it further, and uses it to analyse the results. The methodology of the study is plural, with varying epistemological emphases and both quantitative and qualitative methods. A policy analysis of two different sectors is followed by a survey answered by 113 HR managers and then, based on the quantitative analyses, by interviews in four chosen case companies. The central finding of the thesis is that there indeed are written corporate-level policies for reconciling work and family in companies operating in Finland, in spite of the strong state-level involvement in creating a policy context for work/family reconciliation. Moreover, the existing policies vary in accessibility and use. The most frequently used work/family policies are still the statutory state-level policies for family leave, which apply when a baby is born and during his or her first years. Still, new policies are arising, such as a nurse for an employee’s child who has fallen ill, that are based on company activity only, which shows in both the accessibility and the use of the policy. The reasons for developing corporate-level work/family policies vary between so-called pro-active and re-active companies. In general, family law has a substantial effect on the development of corporate-level policies; headquarters’ gender equality strategies as well as employee demands are also important. Regression analyses showed that corporate image and importance in recruitment are the foremost reasons for companies to develop policies, not, for example, the proportion of female employees in the company.
The reasons for policy development can be summarized as normative, coercive and mimetic pressures, in line with findings from institutional theory. This research, however, takes account of different stakeholder interests and recognizes that institutional theory needs to be complemented with notions of gender and family, which seem to play a part in perceived work/family conflict and in the need for further work/family policies, both in managers’ personal lives and at the organisational level. A central finding that demands more attention is the change, perceived by HR managers, in values towards work and commitment to the organisation among the youngest working generation, Generation Y. Combined with the need for key personnel, this has brought new challenges to companies, especially in knowledge-intensive business, and will presumably lead to the further development of flexible practices in organisations. Access to this flexibility, however, seems to depend even more on the specific knowledge and skills of the employee. How this generation will change organisations remains to be seen in further research.
Abstract:
We propose an efficient and parameter-free scoring criterion, the factorized conditional log-likelihood (f̂CLL), for learning Bayesian network classifiers. The proposed score is an approximation of the conditional log-likelihood criterion. The approximation is devised so as to guarantee decomposability over the network structure, as well as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-theoretic interpretation based on interaction information, which exhibits its discriminative nature. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-of-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show that f̂CLL-trained classifiers achieve accuracy at least as good as that of the best compared classifiers, while using significantly less computational resources.
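Decomposability over the network structure means a score is a sum of per-family terms, one for each variable and its parent set. The sketch below shows this for the plain maximum-likelihood log-likelihood score on discrete data, i.e. the baseline criterion the abstract compares against, not f̂CLL itself; the data layout is an assumption.

```python
import math
from collections import Counter

def loglik_score(data, parents):
    """data: list of dicts mapping variable name -> discrete value.
    parents: dict mapping each variable to a tuple of its parent variables.
    The score decomposes into one term per (variable, parent-set) family,
    each term being sum over counts n of n * log P_ML(var | parents)."""
    score = 0.0
    for var, ps in parents.items():
        joint, marginal = Counter(), Counter()
        for row in data:
            pa = tuple(row[p] for p in ps)
            joint[(row[var], pa)] += 1
            marginal[pa] += 1
        score += sum(n * math.log(n / marginal[pa])
                     for (v, pa), n in joint.items())
    return score
```

Because each family contributes independently, local structure changes only require recomputing the affected terms, which is the property the factorized criterion is designed to retain.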