970 results for network measures
Abstract:
Operationalising and measuring the concept of globalisation is important, as the extent to which the international economy is integrated has a direct impact on industrial dynamics, national trade policies and firm strategies. Using complex systems network analysis with longitudinal trade data from 1938 to 2003, this paper presents a new way to measure globalisation. It demonstrates that some important aspects of the international trade network have been remarkably stable over this period. However, several network measures have changed substantially over the same time frame. Taken together, these analyses provide a novel measure of globalisation.
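As a sketch of what such a measurement pipeline might look like, the snippet below computes a few standard network measures on yearly trade networks with networkx. The abstract does not list the paper's exact measures, so density, clustering, and degree assortativity stand in as illustrative assumptions, as does the edge-list input format.

```python
import networkx as nx

def yearly_trade_measures(edges_by_year):
    """edges_by_year: {year: [(exporter, importer, trade_value), ...]}
    (hypothetical input format, not the paper's data schema)."""
    measures = {}
    for year, edges in edges_by_year.items():
        G = nx.DiGraph()
        G.add_weighted_edges_from(edges)
        measures[year] = {
            # Fraction of possible trade links that actually exist.
            "density": nx.density(G),
            # How clustered trading partners are among themselves.
            "clustering": nx.average_clustering(G.to_undirected()),
            # Do well-connected countries trade mostly with other hubs?
            "assortativity": nx.degree_assortativity_coefficient(G),
        }
    return measures
```

Tracking such measures year by year would show which ones stay flat and which drift, mirroring the paper's stable-versus-changing distinction.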
Abstract:
Many innovations are inspired by past ideas in a nontrivial way. Tracing these origins and identifying scientific branches is crucial for understanding where research inspiration comes from. In this paper, we use citation relations to identify the descendant chart, i.e., the family tree of research papers. Unlike other spanning trees that focus on cost or distance minimization, we make use of the nature of citations and identify the most important parent for each publication, leading to a treelike backbone of the citation network. Measures are introduced to validate the backbone as the descendant chart. We show that citation backbones characterize the hierarchical and fractal structure of scientific development well, and lead to an accurate classification of fields and subfields. © 2011 American Physical Society.
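A minimal sketch of the backbone idea: every paper keeps a single "most important" parent among the papers it cites, so every node has out-degree at most one and the result is tree-like. The importance rule used below (the parent with the most incoming citations) is an assumption for illustration; the paper defines its own criterion.

```python
import networkx as nx

def citation_backbone(citations):
    """citations: DiGraph with an edge paper -> reference it cites."""
    backbone = nx.DiGraph()
    backbone.add_nodes_from(citations)
    for paper in citations:
        parents = list(citations.successors(paper))
        if parents:
            # Assumed importance rule: the most-cited reference wins.
            best = max(parents, key=citations.in_degree)
            backbone.add_edge(paper, best)
    return backbone  # out-degree <= 1 everywhere, i.e. a forest
```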
Abstract:
The purpose of this study is to compare the inferability of various synthetic as well as real biological regulatory networks. In order to assess differences, we apply local network-based measures: that is, instead of applying global measures, we investigate and assess an inference algorithm locally, on the level of individual edges and subnetworks. We demonstrate the behaviour of our local network-based measures with respect to different regulatory networks by conducting large-scale simulations, using ARACNE as an exemplary inference algorithm. The results of our exploratory analysis allow us not only to gain new insights into the strengths and weaknesses of an inference algorithm with respect to the characteristics of different regulatory networks, but also to obtain information that could be used to design novel problem-specific statistical estimators.
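To make the edge-level idea concrete, here is a small sketch of one possible local measure: per-node recall of true regulatory edges in an inferred network. It is an illustrative stand-in, not one of the measures defined in the study.

```python
import networkx as nx

def local_recall(true_net, inferred_net):
    """Per-node recall: fraction of each node's true edges that were inferred."""
    scores = {}
    for node in true_net:
        incident = list(true_net.edges(node))
        if incident:
            hits = sum((u, v) in inferred_net.edges for u, v in incident)
            scores[node] = hits / len(incident)
    return scores

# Tiny hypothetical example: two transcription factors regulating two genes.
true_net = nx.DiGraph([("TF1", "geneA"), ("TF1", "geneB"), ("TF2", "geneB")])
inferred = nx.DiGraph([("TF1", "geneA"), ("TF2", "geneB")])
print(local_recall(true_net, inferred))  # {'TF1': 0.5, 'TF2': 1.0}
```

A global score would collapse this into a single number; the local view shows that inference succeeds around TF2 but only partially around TF1.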
Abstract:
Obesity has been linked with elevated levels of C-reactive protein (CRP), and both have been associated with increased risk of mortality and cardiovascular disease (CVD). Previous studies have used a single 'baseline' measurement, and such analyses cannot account for possible changes in these measures, which may lead to biased estimates of risk. Using four cohorts from CHANCES with repeated measures in participants aged 50 years and older, multivariate time-dependent Cox proportional hazards models were used to estimate hazard ratios (HR) and 95 % confidence intervals (CI) to examine the relationship between body mass index (BMI) and CRP with all-cause mortality and CVD. Being overweight (≥25–<30 kg/m²) or moderately obese (≥30–<35) tended to be associated with a lower risk of mortality compared to normal (≥18.5–<25): ESTHER, HR (95 % CI) 0.69 (0.58–0.82) and 0.78 (0.63–0.97); Rotterdam, 0.86 (0.79–0.94) and 0.80 (0.72–0.89). A similar relationship was found, but only for overweight in Glostrup, HR (95 % CI) 0.88 (0.76–1.02); and moderately obese in Tromsø, HR (95 % CI) 0.79 (0.62–1.01). Associations were not evident between repeated measures of BMI and CVD. Conversely, increasing CRP concentrations, measured on more than one occasion, were associated with an increasing risk of mortality and CVD. Being overweight or moderately obese is associated with a lower risk of mortality, while CRP, independent of BMI, is positively associated with mortality and CVD risk. If inflammation links CRP and BMI, they may participate in distinct/independent pathways. Accounting for independent changes in risk factors over time may be crucial for unveiling their effects on mortality and disease morbidity.
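For readers unfamiliar with time-dependent Cox models on repeated measures, the sketch below shows the long-format data layout and fit call using lifelines' CoxTimeVaryingFitter. The column names and toy rows are assumptions purely to illustrate the layout; they are not the CHANCES data, and a real analysis needs far more observations.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long format: one row per measurement interval per participant; covariates
# (here BMI and CRP) may change from interval to interval.
df = pd.DataFrame({
    "id":    [1, 1, 2, 2, 3, 3, 4],
    "start": [0, 2, 0, 3, 0, 1, 0],
    "stop":  [2, 5, 3, 7, 1, 4, 6],
    "event": [0, 1, 0, 0, 0, 1, 0],   # death observed at end of interval?
    "bmi":   [24.5, 26.1, 31.0, 30.2, 22.0, 21.5, 28.3],
    "crp":   [1.2, 3.4, 0.8, 1.1, 5.0, 6.2, 2.0],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
print(ctv.summary[["coef", "exp(coef)"]])  # exp(coef) = hazard ratio
```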
Abstract:
In an ever more competitive environment, power distribution companies need to continuously monitor and improve the reliability indices of their systems. Network reconfiguration (NR) of a distribution system is a technique well suited to this new deregulated environment, as it allows improvement of system reliability indices without the expense of procuring new equipment. This paper presents a reliability-based NR methodology that uses metaheuristic techniques to search for the optimal network configuration. Three metaheuristics, namely Tabu Search, Evolution Strategy, and Differential Evolution, are tested on a Brazilian distribution network and the results are discussed. © 2009 IEEE.
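As a rough illustration of how one of the named metaheuristics drives such a search, here is a bare-bones Tabu Search skeleton over switch configurations. The state encoding, neighbourhood move, and evaluation function (e.g., a reliability index such as SAIDI to be minimised) are placeholders, not the paper's implementation.

```python
def tabu_search(initial, neighbours, evaluate, iters=200, tabu_len=10):
    """initial: starting configuration (e.g., a tuple of open switches);
    neighbours(state) -> candidate states (e.g., toggle one switch);
    evaluate(state) -> reliability index to minimise (assumed: SAIDI)."""
    best = current = initial
    tabu = [initial]                       # recently visited states are barred
    for _ in range(iters):
        candidates = [s for s in neighbours(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=evaluate)   # best non-tabu move
        tabu = (tabu + [current])[-tabu_len:]     # fixed-length tabu list
        if evaluate(current) < evaluate(best):
            best = current
    return best
```

The tabu list is what distinguishes this from plain hill climbing: it forces the search away from recently visited configurations, letting it escape local optima in the reliability landscape.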
Abstract:
Aberrant behavior of biological signaling pathways has been implicated in diseases such as cancers. Therapies have been developed to target proteins in these networks in the hope of curing the illness or bringing about remission. However, identifying targets for drug inhibition that exhibit a good therapeutic index has proven challenging, since signaling pathways have a large number of components and many interconnections such as feedback, crosstalk, and divergence. Unfortunately, some characteristics of these pathways, such as redundancy, feedback, and drug resistance, reduce the efficacy of single-target therapy and necessitate the use of more than one drug to target multiple nodes in the system. However, choosing multiple targets with a high therapeutic index poses further challenges, since the combinatorial search space can be huge. To cope with the complexity of these systems, computational tools such as ordinary differential equations have been used to successfully model some of these pathways. Building these models, however, requires experimentally measured initial concentrations of the components and reaction rates, which are difficult to obtain and, for very large networks, may not yet be available. Fortunately, there exist other modeling tools, though not as powerful as ordinary differential equations, that do not need rates and initial conditions to model signaling pathways; Petri nets and graph theory are among them. In this thesis, we introduce a methodology based on Petri net siphon analysis and graph centrality measures for identifying prospective targets for single- and multiple-drug therapies. In this methodology, potential targets are first identified in the Petri net model of a signaling pathway using siphon analysis. Then, graph-theoretic centrality measures are employed to prioritize the candidate targets. An algorithm is also developed to check whether or not the candidate targets can disable the intended outputs in the graph model of the system. We implement structural and dynamical models of ErbB1-Ras-MAPK pathways and use them to assess and evaluate this methodology. The identified drug targets, single and multiple, correspond to clinically relevant drugs. Overall, the results suggest that this methodology, using siphons and centrality measures, shows promise in identifying and ranking drug targets. Since the methodology uses only the structural information of the signaling pathways and needs neither initial conditions nor dynamical rates, it can be applied to larger networks.
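A sketch of the ranking-and-validation step described above: candidates (assumed here to be already identified via siphon analysis, and not to include the input/output nodes themselves) are ranked by a graph centrality measure, and each is checked for whether its removal disconnects the pathway input from the intended output. Betweenness centrality and the function names are illustrative assumptions, not the thesis's exact algorithm.

```python
import networkx as nx

def rank_and_validate(pathway, candidates, source, sink):
    """pathway: DiGraph of the signaling network; candidates: nodes from
    siphon analysis; source/sink: pathway input and intended output."""
    # Prioritize candidates by how much signal flow passes through them.
    centrality = nx.betweenness_centrality(pathway)
    ranked = sorted(candidates, key=centrality.get, reverse=True)
    disabling = []
    for target in ranked:
        reduced = pathway.copy()
        reduced.remove_node(target)
        # Does inhibiting this node cut every path to the output?
        if not nx.has_path(reduced, source, sink):
            disabling.append(target)
    return ranked, disabling
```

Extending the removal step to pairs or triples of candidates would give the multiple-drug variant, at combinatorial cost.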
Abstract:
CCTV and surveillance networks are increasingly being used for operational as well as security tasks. One emerging area of technology that lends itself to operational analytics is soft biometrics. Soft biometrics can be used to describe a person and detect them throughout a sparse multi-camera network, enabling tasks such as determining the time taken to get from point to point, and the paths taken through an environment, by detecting and matching people across disjoint views. However, in a busy environment with hundreds if not thousands of people, such as an airport, attempting to monitor everyone is highly unrealistic. In this paper we propose an average soft biometric that can be used to identify people who look distinct, and are thus suitable for monitoring through a large, sparse camera network. We demonstrate how an average soft biometric can be used to identify unique people and calculate operational measures such as the time taken to travel from point to point.
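One plausible reading of the "average soft biometric" is sketched below: build the mean soft-biometric feature vector over the observed crowd and flag people whose descriptors sit unusually far from it as distinct enough to track across cameras. The Euclidean distance and the z-score threshold are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def distinct_subjects(features, z_threshold=2.0):
    """features: (n_people, n_traits) array of soft-biometric descriptors
    (e.g., encoded clothing colour, height band, build)."""
    mean = features.mean(axis=0)                  # the "average" person
    dist = np.linalg.norm(features - mean, axis=1)
    z = (dist - dist.mean()) / dist.std()         # how unusual each person is
    return np.where(z > z_threshold)[0]           # indices of distinct people
```

Only the people returned here would be matched across disjoint camera views, keeping the monitoring problem tractable in a crowded space.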
Abstract:
Networked control systems (NCSs) offer many advantages over conventional control; however, they also present challenging problems such as network-induced delay and packet losses. This paper proposes a predictive compensation approach for simultaneous network-induced delays and packet losses. Unlike the majority of existing NCS control methods, the proposed approach addresses the co-design of both network and controller. It also relaxes the requirements for precise process models and full understanding of NCS network dynamics. For a series of possible sensor-to-actuator delays, the controller computes a series of corresponding redundant control values and sends them in a single packet to the actuator. Upon receiving the control packet, the actuator measures the actual sensor-to-actuator delay and computes the control signal from the control packet. When packet dropout occurs, the actuator utilizes past control packets to generate an appropriate control signal. The effectiveness of the approach is demonstrated through examples.
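The actuator-side logic described above can be sketched as follows: each packet carries one control value per anticipated sensor-to-actuator delay, the actuator indexes into it by the measured delay, and it falls back to the most recent stored packet on dropout. The packet structure, delay quantisation in sampling steps, and the zero safe default are assumptions for illustration.

```python
def actuator_step(packet, measured_delay_steps, history):
    """packet: list of control values indexed by delay in sampling steps,
    or None on packet dropout; history: previously received packets."""
    if packet is not None:
        history.append(packet)        # remember for future dropouts
    elif history:
        packet = history[-1]          # dropout: reuse the latest packet
    else:
        return 0.0                    # assumed safe default before any packet
    # Pick the control value precomputed for the delay actually observed.
    idx = min(measured_delay_steps, len(packet) - 1)
    return packet[idx]
```

Because the controller precomputes a value for every plausible delay, the actuator never has to predict the plant itself; it only needs a clock to measure how late the packet arrived.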