945 results for scale free network


Relevance: 30.00%

Publisher:

Abstract:

Institutions are widely regarded as important, even ultimate, drivers of economic growth and performance. A recent mainstream of institutional economics has concentrated on the effects of persistent, often imprecisely measured institutions and on cataclysmic events as agents of noteworthy institutional change. As a consequence, institutional change without large-scale shocks has received little attention. In this dissertation I apply a complementary, quantitative-descriptive approach that relies on measures of actually enforced institutions to study institutional persistence and change over a long time period undisturbed by the typically studied cataclysmic events. Placing institutional change at the center of attention makes it possible to recognize different speeds of institutional innovation and the continuous coexistence of institutional persistence and change. Specifically, I combine text mining procedures, network analysis techniques and statistical approaches to study persistence and change in England's common law over the Industrial Revolution (1700-1865). Based on the doctrine of precedent, a peculiarity of common law systems, I construct and analyze what appears to be the first citation network reflecting lawmaking in England. Most strikingly, I find large-scale change in the making of English common law around the turn of the 19th century, a period free from the typically studied cataclysmic events. Within a few decades a legal innovation process with low depreciation rates (1 to 2 percent) and strong past-persistence transitioned to a present-focused innovation process with significantly higher depreciation rates (4 to 6 percent) and weak past-persistence. Comparison with U.S. Supreme Court data reveals a similar U.S. transition towards the end of the 19th century. The English and U.S. transitions appear to have unfolded in a very specific manner: a new body of law arose during the transitions and developed in a self-referential manner, while the existing body of law lost influence but remained prominent. Additional findings suggest that Parliament doubled its influence on the making of case law within the first decades after the Glorious Revolution and that England's legal rules manifested a high degree of long-term persistence. The latter allows for the possibility that the often-noted persistence of institutional outcomes derives from the actual persistence of institutions.
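
As a rough illustration of the citation-network method this abstract describes (not the dissertation's actual data or estimator), a minimal sketch in Python with networkx might look as follows; the case names, years, and the depreciation proxy are invented:

```python
import networkx as nx

# Hypothetical precedent citation network: an edge A -> B means that
# case A cites case B as precedent. Case names and years are invented.
cases = {"CaseA": 1705, "CaseB": 1750, "CaseC": 1798, "CaseD": 1802, "CaseE": 1840}
citations = [("CaseB", "CaseA"), ("CaseC", "CaseA"), ("CaseC", "CaseB"),
             ("CaseD", "CaseC"), ("CaseE", "CaseD"), ("CaseE", "CaseC")]

G = nx.DiGraph()
for case, year in cases.items():
    G.add_node(case, year=year)
G.add_edges_from(citations)

# A crude proxy for "depreciation": how quickly citations shift toward
# recent precedents. Real depreciation rates would come from a fitted
# decay model over the full corpus of citations.
ages = [G.nodes[u]["year"] - G.nodes[v]["year"] for u, v in G.edges]
print("mean age of cited precedent:", sum(ages) / len(ages), "years")
print("citations received per case:", dict(G.in_degree()))
```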

Relevance: 30.00%

Publisher:

Abstract:

The production of artistic prints in the sixteenth- and seventeenth-century Netherlands was an inherently social process. Turning out prints at any reasonable scale depended on fluid coordination between designers, platecutters, and publishers; roles that, by the sixteenth century, were considered distinguished enough to merit distinct credits engraved on the plates themselves: invenit, fecit/sculpsit, and excudit. While any one designer, plate cutter, or publisher could potentially exercise a great deal of influence over the production of a single print, their individual decisions (Whom to select as an engraver? What subjects to create for a print design? What market to sell to?) would have been variously constrained or encouraged by their position in this larger network (Whom do they already know? And whom, in turn, do their contacts know?). This dissertation addresses the impact of these constraints and affordances through the novel application of computational social network analysis to major databases of surviving prints from this period. This approach is used to evaluate several questions about trends in early modern print production practices that have not been satisfactorily addressed by traditional literature based on case studies alone: Did the social capital demanded by print production result in centralized or distributed production of prints? When, and to what extent, did printmakers and publishers in the Low Countries favor international over domestic collaborators? And were printmakers under the same pressure as painters to specialize in particular artistic genres? This dissertation ultimately suggests how simple professional incentives endemic to the practice of printmaking may, at large scales, have resulted in quite complex patterns of collaboration and production. The framework of network analysis surfaces the role of certain printmakers who tend to be neglected in aesthetically focused histories of art. This approach also highlights important issues concerning art historians' balancing of individual influence against the impact of longue durée trends. Finally, this dissertation raises questions about the current limitations and future possibilities of combining computational methods with cultural heritage datasets in the pursuit of historical research.
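
For illustration only, here is a minimal sketch of the kind of computational social network analysis described above, using Python and networkx; all names and records are invented, and the dissertation's actual databases and measures may differ:

```python
import networkx as nx

# Hypothetical production records: (designer, engraver, publisher),
# i.e. the invenit / sculpsit / excudit credits. All names invented.
prints = [
    ("Designer1", "Engraver1", "PublisherA"),
    ("Designer1", "Engraver2", "PublisherA"),
    ("Designer2", "Engraver2", "PublisherB"),
    ("Designer3", "Engraver1", "PublisherA"),
]

# Collaboration graph: an edge joins any two people credited on the
# same print, weighted by how often they worked together.
G = nx.Graph()
for designer, engraver, publisher in prints:
    for u, v in [(designer, engraver), (engraver, publisher),
                 (designer, publisher)]:
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

# Betweenness centrality surfaces brokers in the network, the kind of
# figure aesthetically focused histories of art can overlook.
print(nx.betweenness_centrality(G))
```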

Relevance: 30.00%

Publisher:

Abstract:

This thesis presents approximation algorithms for some NP-hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: an approximation algorithm for an NP-hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u, v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges or nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k)-approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm with a similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to that of the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink.
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u, v, the goal is to find a low-cost network which, for each such pair, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + ε)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and for other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
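
As a minimal illustration of the pairwise connectivity definitions above (not of the thesis's approximation algorithms), networkx can compute local edge- and vertex-connectivity via Menger-type max-flow computations; the small graph below is invented:

```python
import networkx as nx
from networkx.algorithms.connectivity import (
    local_edge_connectivity, local_node_connectivity)

# Invented example: a 4-cycle with a chord.
G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)])

u, v = 2, 4
# By Menger's theorem, u and v are k-edge-connected exactly when their
# local edge connectivity is at least k, i.e. no k-1 edge deletions
# can separate them; similarly for vertex connectivity.
print(local_edge_connectivity(G, u, v))  # 2
print(local_node_connectivity(G, u, v))  # 2
```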

Relevance: 30.00%

Publisher:

Abstract:

Peer-to-peer information sharing has fundamentally changed the customer decision-making process. Recent developments in information technologies have enabled digital sharing platforms to influence various granular aspects of the information-sharing process. Despite the growing importance of digital information sharing, little research has examined the optimal design choices for a platform seeking to maximize returns from information sharing. My dissertation seeks to fill this gap. Specifically, I study novel interventions that can be implemented by the platform at different stages of information sharing. In collaboration with a leading for-profit platform and a non-profit platform, I conduct three large-scale field experiments to causally identify the impact of these interventions on customers' sharing behaviors as well as on sharing outcomes. The first essay examines whether and how a firm can enhance social contagion by simply varying the message shared by customers with their friends. Using a large randomized field experiment, I find that (i) adding only information about the sender's purchase status increases the likelihood of recipients' purchase; (ii) adding only information about the referral reward increases recipients' follow-up referrals; and (iii) adding information about both the sender's purchase and the referral reward increases neither the likelihood of purchase nor follow-up referrals. I then discuss the underlying mechanisms. The second essay studies whether and how a firm can design unconditional incentives to engage customers who have already revealed a willingness to share. I conduct a field experiment to examine the impact of incentive design on the sender's purchase as well as further referral behavior. I find evidence that the incentive structure has a significant, but interestingly opposing, impact on the two outcomes. The results also provide insights into senders' motives for sharing. The third essay examines whether and how a non-profit platform can use mobile messaging to leverage recipients' social ties to encourage blood donation. I design a large field experiment to causally identify the impact of different types of information and incentives on donors' self-donation and group donation behavior. My results show that non-profits can stimulate a group effect and increase blood donation, but only with a group reward. Such a group reward works by motivating a different donor population. In summary, the findings from the three studies offer valuable insights for platforms and social enterprises on how to engineer digital platforms to create social contagion. The rich data from the randomized experiments and complementary sources (archival and survey) also allow me to test the underlying mechanisms at work. In this way, my dissertation provides both managerial implications and theoretical contributions to the phenomenon of peer-to-peer information sharing.
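
For a sense of how one treatment contrast from such a randomized field experiment might be analyzed, here is a minimal sketch; the arm labels and counts are invented, and the dissertation's actual estimation strategy is likely richer:

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented counts: recipients who purchased, by message arm. The
# treatment arm adds the sender's purchase status to the shared message.
purchases = [130, 95]      # purchasers in treatment and control arms
recipients = [1000, 1000]  # recipients randomized to each arm

# Two-proportion z-test for the lift in recipients' purchase likelihood.
stat, pvalue = proportions_ztest(purchases, recipients)
print(f"z = {stat:.2f}, p = {pvalue:.4f}")
```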

Relevance: 30.00%

Publisher:

Abstract:

Social networks are a recent communication phenomenon with a high prevalence of young users. This concept serves as the motto for a multidisciplinary project that aims to create a simple communication network using light as the transmission medium. Mixed teams, composed of students from secondary schools and higher education institutions, are partners in the development of an optical transceiver. An LED lamp array and a small photodiode serve as the optical transmitter and receiver, respectively. With several transceivers aligned with each other, this configuration creates a ring communication network, enabling the exchange of messages between users. Through this project, some concepts addressed in physics classes in secondary schools (e.g. photoelectric phenomena and the properties of light) are experimentally verified and used to communicate, in a classroom or a laboratory.
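
A minimal sketch of the kind of on-off keying such an LED/photodiode link could use; the bit period, framing, and functions below are hypothetical and not the project's actual transceiver protocol:

```python
# Hypothetical on-off keying (OOK) for the LED/photodiode link: each
# bit maps to one LED state (1 = on, 0 = off) per bit period.
BIT_PERIOD_MS = 10  # assumed bit duration; the real value depends on hardware

def encode(message: str) -> list:
    """Turn text into a flat list of LED on/off states, one per bit."""
    bits = []
    for byte in message.encode("ascii"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

def decode(bits: list) -> str:
    """Reassemble characters from sampled photodiode readings."""
    chars = []
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        chars.append(chr(byte))
    return "".join(chars)

assert decode(encode("hi")) == "hi"
```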

Relevance: 30.00%

Publisher:

Abstract:

A big gap still exists between the clinical and genetic diagnosis of dyslipidemic disorders. Almost 60% of patients with a clinical diagnosis of familial hypercholesterolemia (FH) still lack a genetic diagnosis. Here we present the preliminary results of an integrative approach intended to identify new candidate genes and to dissect pathways that may be dysregulated in the disease. Interesting hits will subsequently be knocked down in vitro in order to evaluate their functional role in the uptake of fluorescently labeled LDL and free cell cholesterol using automated microscopy.

Relevance: 30.00%

Publisher:

Abstract:

People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
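
As an illustration of the dynamic-programming core of such models (a log-sum Bellman recursion of the kind used in dynamic discrete choice and recursive-logit route choice models) here is a minimal sketch on a tiny invented network; it is not the thesis's estimator:

```python
import math

# Tiny invented network: node -> list of (next node, arc utility).
# "D" is the destination; utilities (negative costs) are invented.
arcs = {
    "A": [("B", -1.0), ("C", -1.5)],
    "B": [("D", -1.0)],
    "C": [("D", -0.5)],
    "D": [],
}

# Value iteration on the log-sum (expected maximum utility) recursion:
#   V(D) = 0,  V(s) = log( sum over arcs a of exp(u(s,a) + V(next(s,a))) )
V = {s: 0.0 for s in arcs}
for _ in range(100):
    for s, out in arcs.items():
        if out:  # the destination keeps V = 0
            V[s] = math.log(sum(math.exp(u + V[t]) for t, u in out))

# Arc choice probabilities then take a logit form at each node.
denom = sum(math.exp(u + V[t]) for t, u in arcs["A"])
probs = {t: math.exp(u + V[t]) / denom for t, u in arcs["A"]}
print(V, probs)
```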

Relevance: 30.00%

Publisher:

Abstract:

This PhD thesis is an empirical research project in the field of modern Polish history. The thesis focuses on Solidarity, the Network and the idea of workers' self-management, and it is based on an in-depth analysis of Solidarity archival material. The Solidarity trade union was born in August 1980 after talks between the communist government and strike leaders at the Gdansk Lenin Shipyards. In 1981 a group called the Network arose out of cooperation between Poland's great industrial factory plants. The Network grew out of Solidarity; it was made up of Solidarity activists, and the group acted as an economic partner to the union. The Network was the base of a grass-roots, nationwide workers' self-management movement. Solidarity and the self-management movement were crushed by the imposition of Martial Law in December 1981. Solidarity revived itself immediately, and the union created an underground society. The Network also revived in the underground, and it continued to promote self-management activity where possible. When Solidarity regained its legal status in April 1989, workers' self-management no longer had the same importance in the union. Solidarity's new politico-economic strategy focused on free markets, foreign investment and privatization. This research project ends in July 1990, when the new Solidarity-backed government enacted a privatization law. The government decided to transform the property ownership structure through a centralized privatization process, which was a blow to supporters of workers' self-management. This PhD thesis provides new insight into the evolution of the Solidarity union from 1980 to 1990 by analyzing the fate of workers' self-management. The project also examines the role of the Network throughout the 1980s and analyzes the important link between workers' self-management and the core ideas of Solidarity. In addition, the link between political and economic reform is an important theme in this research project. The Network was aware that authentic workers' self-management required reforms to the authoritarian political system. Workers' self-management competed against other politico-economic ideas during the 1980s in Poland. The outcome of this competition between different reform concepts has shaped modern-day Polish politics, economics and society.

Relevance: 30.00%

Publisher:

Abstract:

This document presents the catalogue techniques used at network GDAC level to facilitate the discovery of platforms and data files. Some AtlantOS networks are organized as DAC-GDACs that continuously update a catalogue of metadata on observation datasets and platforms:

• A DAC is a Data Assembly Centre operating at national or regional scale. It manages data and metadata for its area, with a direct link to scientists and operators. The DAC pushes observations to the network GDAC.

• A GDAC is a Global Data Assembly Centre, designed for a global observation network such as Argo, OceanSITES, DBCP, EGO, Gosud, etc. The GDAC aggregates the data and metadata of an observation network, in real-time and delayed mode, as provided by the DACs.
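
A minimal sketch of the DAC-to-GDAC aggregation described above; the record structure and field names are invented, not the AtlantOS schema:

```python
# Invented miniature of the data flow: each DAC manages metadata for
# its area and pushes records to the network GDAC, which aggregates
# real-time and delayed-mode records into one catalogue.
dac_records = {
    "DAC-Atlantic": [{"platform": "buoy-001", "network": "DBCP", "mode": "real-time"}],
    "DAC-Pacific": [{"platform": "float-042", "network": "Argo", "mode": "delayed"}],
}

gdac_catalogue = []
for dac, records in dac_records.items():
    for record in records:
        gdac_catalogue.append({**record, "dac": dac})

print(gdac_catalogue)
```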

Relevance: 30.00%

Publisher:

Abstract:

Several unmet needs have been identified in allergic rhinitis: identification of the time of onset of the pollen season, optimal control of rhinitis and comorbidities, patient stratification, multidisciplinary teams for integrated care pathways, innovation in clinical trials and, above all, patient empowerment. MASK-rhinitis (MACVIA-ARIA Sentinel NetworK for allergic rhinitis) is a simple, patient-centred system devised to fill many of these gaps using Information and Communications Technology (ICT) tools and a clinical decision support system (CDSS) based on the most widely used guideline for allergic rhinitis and its asthma comorbidity (ARIA 2015 revision). It is one of the implementation systems of Action Plan B3 of the European Innovation Partnership on Active and Healthy Ageing (EIP on AHA). Three tools are used for the electronic monitoring of allergic diseases: a cell-phone-based daily visual analogue scale (VAS) assessment of disease control, CARAT (Control of Allergic Rhinitis and Asthma Test) and e-Allergy screening (a premedical system for the early diagnosis of allergy and asthma based on online tools). These tools are combined with the CDSS and are available in many languages. An e-CRF and an e-learning tool complete MASK. MASK is flexible, and other tools can be added. It appears to be an advanced, global and integrated ICT answer to many unmet needs in allergic diseases that will improve policies and standards.

Relevance: 30.00%

Publisher:

Abstract:

Optimization of Carnobacterium divergens V41 growth and bacteriocin activity in a culture medium deprived of animal protein, as needed for food bioprotection, was performed using a statistical approach. In a screening experiment, twelve factors (pH, temperature, carbohydrates, NaCl, yeast extract, soy peptone, sodium acetate, ammonium citrate, magnesium sulphate, manganese sulphate, ascorbic acid and thiamine) were tested for their influence on maximal growth and bacteriocin activity using a two-level incomplete factorial design with 192 experiments performed in microtiter plate wells. Based on the results, a basic medium was developed and three variables (pH, temperature and carbohydrate concentration) were selected for a scale-up study in a bioreactor. A 2³ complete factorial design was performed, allowing the estimation of the linear effects of the factors and all first-order interactions. The best conditions for cell production were obtained with a temperature of 15 °C and a carbohydrate concentration of 20 g/l whatever the pH (in the range 6.5-8), and the best conditions for bacteriocin activity were obtained at 15 °C and pH 6.5 whatever the carbohydrate concentration (in the range 2-20 g/l). The predicted final count of C. divergens V41 and the bacteriocin activity under the optimized conditions (15 °C, pH 6.5, 20 g/l carbohydrates) were 2.4 × 10¹⁰ CFU/ml and 819,200 AU/ml respectively. C. divergens V41 cells cultivated under the optimized conditions were able to grow in cold-smoked salmon and totally inhibited the growth of Listeria monocytogenes (< 50 CFU g⁻¹) during five weeks of vacuum storage at 4 °C and 8 °C.
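
For illustration, here is a minimal sketch of generating the 2³ full factorial design described above; the pH and carbohydrate levels follow the ranges stated in the abstract, while the upper temperature level is an assumption:

```python
import itertools

# Coded 2^3 full factorial design (-1 = low, +1 = high) for the three
# selected variables. The pH and carbohydrate levels follow the ranges
# in the abstract; the upper temperature level is an assumption.
levels = {"pH": (6.5, 8.0), "temp_C": (15.0, 30.0), "carb_g_per_l": (2.0, 20.0)}

design = list(itertools.product([-1, 1], repeat=3))
for run, coded in enumerate(design, start=1):
    actual = {name: lo if c == -1 else hi
              for (name, (lo, hi)), c in zip(levels.items(), coded)}
    print(run, coded, actual)

# With a measured response y per run, least squares on
#   y = b0 + b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3
# estimates the linear effects and all first-order interactions.
```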

Relevance: 30.00%

Publisher:

Abstract:

Deep-fat frying can induce the formation of undesirable products, such as lipid oxidation products and acrylamide, in fried foods. Plantain chips produced by small-scale producers are sold to consumers without any quality control. The objective of this study was to evaluate the quality of plantain chips from local producers in relation to production process parameters and oils, and to identify the limiting factors for the formation of acrylamide in plantain chips. Samples of frying oils and plantain chips prepared with either palm olein or soybean oil were collected from 10 producers in Yaoundé. The quality parameters determined in this study were: fatty acid composition of the oils, determined by gas chromatography (GC) of fatty acid methyl esters; trans fatty acids, determined by Fourier-transform infrared spectroscopy; tocopherols and tocotrienols, as markers of nutritional quality, analyzed by high-performance liquid chromatography in isocratic mode; free fatty acids and acylglycerols, as markers of lipid hydrolysis, analyzed by GC of trimethylsilyl derivatives of glycerides; conjugated dienes, anisidine value and viscosity, as markers of lipid oxidation and thermal decomposition of the oils; and acrylamide, which is formed through the Maillard reaction and has been identified as a toxic compound in various fried products. The asparagine content of the raw fresh plantain powder was also determined. The fatty acid composition of the palm oleins was stable within a day of intermittent frying. In the soybean oils, about 57% and 62.5% of linoleic and linolenic acids, respectively, were lost, but trans fatty acids were not detected. The soybean oils were partly hydrolysed, leading to the formation of free fatty acids, monoacylglycerols and diacylglycerols. In both oils, tocopherol and tocotrienol contents decreased significantly, by about 50%. Anisidine value (AV) and polymer contents increased slightly in fried palm oleins, while conjugated hydroperoxides, AV and polymers greatly increased in soybean oils. Acrylamide was not detected in the chips. This is explained by the absence of asparagine in the raw plantains, the other acrylamide precursors being present. This study shows that the plantain chips prepared at small scale in Yaoundé with palm olein are of good quality regarding oxidation and hydrolysis parameters and the absence of acrylamide. In contrast, oxidation developed in soybean oil, whose use for frying should be questioned. Considering that asparagine is the limiting factor for the formation of acrylamide in plantain chips, its content, which depends on several factors such as production parameters and maturity stage, should be explored.

Relevance: 30.00%

Publisher:

Abstract:

Long-term monitoring of ambient mercury (Hg) on a global scale, to assess its emission, transport, atmospheric chemistry, and deposition processes, is vital to understanding the impact of Hg pollution on the environment. The Global Mercury Observation System (GMOS) project was funded by the European Commission (http://www.gmos.eu) and started in November 2010 with the overall goal of developing a coordinated global observing system to monitor Hg on a global scale, including a large network of ground-based monitoring stations, ad hoc periodic oceanographic cruises, and measurement flights in the lower and upper troposphere as well as in the lower stratosphere. To date, more than 40 ground-based monitoring sites constitute the global network, covering many regions for which little to no observational data were available before GMOS. This work presents atmospheric Hg concentrations recorded worldwide in the framework of the GMOS project (2010-2015), analyzing the Hg measurement results in terms of temporal trends, seasonality and comparability within the network. Major findings highlighted in this paper include a clear gradient of Hg concentrations between the Northern and Southern hemispheres, confirming that the observed gradient is mostly driven by local and regional sources, which can be anthropogenic, natural or a combination of both.
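
As a minimal illustration of the kind of hemispheric-gradient and seasonality analysis described above, here is a sketch with invented miniature data; the actual GMOS processing chain is certainly more involved:

```python
import pandas as pd

# Invented miniature of a GMOS-style dataset: station, latitude,
# sampling time, and Hg concentration in ng/m3. Values are illustrative.
df = pd.DataFrame({
    "station": ["N1", "N1", "S1", "S1"],
    "lat": [45.0, 45.0, -40.0, -40.0],
    "time": pd.to_datetime(["2013-01-15", "2013-07-15",
                            "2013-01-15", "2013-07-15"]),
    "hg_ng_m3": [1.6, 1.4, 1.0, 0.9],
})
df["hemisphere"] = df["lat"].apply(lambda x: "N" if x >= 0 else "S")

# Inter-hemispheric gradient: mean concentration per hemisphere.
print(df.groupby("hemisphere")["hg_ng_m3"].mean())

# Seasonality: mean concentration by calendar month.
print(df.groupby(df["time"].dt.month)["hg_ng_m3"].mean())
```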

Relevance: 30.00%

Publisher:

Abstract:

Multimetallic shape-controlled nanoparticles offer great opportunities to tune the activity, selectivity, and stability of electrocatalytic surface reactions. In many cases, however, our synthetic control over particle size, composition, and shape is limited, requiring trial and error. Deeper atomic-scale insight into the particle formation process would enable more rational syntheses. Here we exemplify this using a family of trimetallic PtNiCo nanooctahedra obtained via a low-temperature, surfactant-free solvothermal synthesis. We analyze the competition between the Ni and Co precursors under "one-step" coreduction conditions, in which the Ni reduction rates prevailed. To tune the Co reduction rate and the final Co content, we develop a "two-step" route and track the evolution of the composition and morphology of the particles at the atomic scale. To achieve this, scanning transmission electron microscopy and energy-dispersive X-ray elemental mapping techniques are used. We provide evidence of a heterogeneous element distribution caused by element-specific anisotropic growth, and we create octahedral nanoparticles with tailored atomic compositions such as Pt1.5M, PtM, and PtM1.5 (M = Ni + Co). These trimetallic electrocatalysts have been tested toward the oxygen reduction reaction (ORR), showing greatly enhanced mass activity relative to commercial Pt/C and less activity loss than binary PtNi and PtCo after 4000 potential cycles.