969 results for Tychonoff topology
Abstract:
Brain tissue from so-called Alzheimer's disease (AD) mouse models has previously been examined using H-1 NMR-metabolomics, but comparable information concerning human AD is negligible. Since no animal model recapitulates all the features of human AD, we undertook the first H-1 NMR-metabolomics investigation of human AD brain tissue. Human post-mortem tissue from 15 AD subjects and 15 age-matched controls was prepared for analysis through a series of lyophilisation, milling, extraction and randomisation steps, and samples were analysed using H-1 NMR. Using partial least squares discriminant analysis, a model was built using data obtained from brain extracts. Analysis of brain extracts led to the identification of 24 metabolites. Significant elevations in brain alanine (15.4 %) and taurine (18.9 %) were observed in AD patients (p ≤ 0.05). Pathway topology analysis implicated dysregulation of either taurine and hypotaurine metabolism or alanine, aspartate and glutamate metabolism. Furthermore, screening of metabolites for AD biomarkers demonstrated that individual metabolites discriminated cases of AD only weakly [receiver operating characteristic (ROC) AUC < 0.67; p < 0.05]. However, paired metabolite ratios (e.g. alanine/carnitine) were more powerful discriminating tools (ROC AUC = 0.76; p < 0.01). This study further demonstrates the potential of metabolomics for elucidating the underlying biochemistry and helping to identify AD in patients attending the memory clinic.
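As a rough illustration of the biomarker screen described above, the sketch below compares the ROC AUC of a single metabolite against a paired-metabolite ratio. The concentration values and group separations are synthetic assumptions, not the study's data; only the group sizes (15 vs. 15) follow the abstract.

```python
# Minimal sketch of a single-metabolite vs. metabolite-ratio ROC comparison.
# All concentration values below are synthetic illustrations.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 15                                        # 15 AD subjects, 15 controls
labels = np.r_[np.ones(n), np.zeros(n)]       # 1 = AD, 0 = control

# Hypothetical NMR-derived concentrations (arbitrary units)
alanine   = np.r_[rng.normal(1.15, 0.2, n), rng.normal(1.00, 0.2, n)]
carnitine = np.r_[rng.normal(0.90, 0.2, n), rng.normal(1.00, 0.2, n)]

print("alanine alone     AUC =", roc_auc_score(labels, alanine))
print("alanine/carnitine AUC =", roc_auc_score(labels, alanine / carnitine))
```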
Abstract:
Inspired by the commercial application of the Exechon machine, this paper proposes a novel parallel kinematic machine (PKM) named Exe-Variant. By exchanging the sequence of kinematic pairs in each limb of the Exechon machine, the Exe-Variant PKM adopts a 2UPR/1SPR topology consisting of two identical UPR limbs and one SPR limb. The inverse kinematics of the 2UPR/1SPR parallel mechanism was first analyzed, based on which a conceptual design of the Exe-Variant was carried out. An algorithm for searching the reachable workspaces of the Exe-Variant and the Exechon was then proposed. Finally, the workspaces of two example systems of the Exechon and the Exe-Variant with approximate dimensions were numerically simulated and compared. The comparison shows that the Exe-Variant possesses a workspace competitive with that of the Exechon machine, indicating that it can be used as a promising reconfigurable module in a hybrid 5-DOF machine tool system.
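A reachable-workspace search of this kind can be sketched by Monte Carlo sampling: sample candidate platform poses, solve a (here drastically simplified) inverse kinematics for the limb lengths, and keep poses whose limb strokes stay within the actuator limits. The joint-point geometry and stroke limits below are hypothetical placeholders, far simpler than the 2UPR/1SPR kinematics of the actual machines.

```python
# Minimal workspace-search sketch under simplified, purely translational IK.
import numpy as np

base = np.array([[ 0.5, 0.0, 0.0], [-0.5, 0.0, 0.0], [0.0, 0.5, 0.0]])
plat = np.array([[ 0.2, 0.0, 0.0], [-0.2, 0.0, 0.0], [0.0, 0.2, 0.0]])
L_MIN, L_MAX = 0.6, 1.2   # hypothetical actuator stroke limits (m)

rng = np.random.default_rng(1)
samples = rng.uniform([-1, -1, 0.3], [1, 1, 1.5], size=(20000, 3))

# Inverse kinematics (simplified): limb length = distance from each base
# joint to the corresponding platform joint after translating the platform.
lengths = np.linalg.norm(samples[:, None, :] + plat[None, :, :]
                         - base[None, :, :], axis=2)
ok = np.all((lengths >= L_MIN) & (lengths <= L_MAX), axis=1)
print(f"{ok.sum()} of {len(samples)} sampled poses are reachable")
```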
Abstract:
In order to carry out high-precision machining of large, thin-walled aerospace structural components with complex surfaces, this paper proposes a novel parallel kinematic machine (PKM) and formulates its semi-analytical theoretical stiffness model considering gravitational effects, which is verified by stiffness experiments. From the viewpoint of topology structure, the novel PKM consists of two substructures, a redundant parallel mechanism and an overconstrained parallel mechanism, connected by two interlinked revolute joints. The theoretical stiffness model of the novel PKM is established upon the virtual work principle and the deformation superposition principle, after mapping the stiffness models of the substructures from joint space to operational space by Jacobian matrices and considering the deformation contributions of the interlinked revolute joints to the two substructures. Meanwhile, the component gravities are treated as external payloads exerted on the end reference point of the novel PKM by resorting to the static equivalence principle. This approach is validated by comparing the theoretical stiffness values with experimental stiffness values in the same configurations, which also indicates that an equivalent gravity load can describe the actual distributed gravities with acceptable accuracy. Finally, on the basis of the verified theoretical stiffness model, the stiffness distributions of the novel PKM are illustrated and the contributions of component gravities to the stiffness of the novel PKM are discussed.
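The joint-space-to-operational-space mapping mentioned above can be sketched with the standard virtual-work identity K_op = J^T K_q J, valid when the Jacobian J maps task velocity to joint velocity. The Jacobian, limb stiffnesses and equivalent gravity load below are illustrative placeholders, not the paper's identified values.

```python
# Minimal sketch of Jacobian-based stiffness mapping and gravity deflection.
import numpy as np

J = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.3],
              [0.0, 0.1, 1.0]])            # hypothetical Jacobian (task -> joint)
K_q = np.diag([2.0e7, 2.0e7, 1.5e7])       # assumed limb stiffnesses (N/m)

K_op = J.T @ K_q @ J                       # operational-space stiffness matrix
g = np.array([0.0, 0.0, -500.0])           # assumed equivalent gravity load (N)
print("deflection under gravity (m):", np.linalg.solve(K_op, g))
```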
Abstract:
The future European power system will have a hierarchical structure created by layers of system control, from a Supergrid via regional high-voltage transmission through to medium- and low-voltage distribution. Each level will have generation sources, such as large-scale offshore wind, wave, solar thermal and nuclear, directly connected to this Supergrid, and high levels of embedded generation connected to the medium-voltage distribution system. It is expected that the fuel portfolio will be dominated by offshore wind in Northern Europe and PV in Southern Europe. The strategies required to manage the coordination of supply-side variability with demand-side variability will include large-scale interconnection, demand-side management, load aggregation and storage in the context of the Supergrid combined with the Smart Grid. The associated design challenge will include not only control topology, data acquisition, analysis and communications technologies, but also the selection of the fuel portfolio at a macro level. This paper quantifies the amount of demand-side management, storage and so-called 'back-up generation' needed to support an 80% renewable energy portfolio in Europe by 2050. © 2013 IEEE.
Abstract:
In multi-terminal high voltage direct current (HVDC) grids, the widely deployed droop control strategies cause a non-uniform voltage deviation, determined by the network topology and droop settings, that distorts the power flow. This voltage deviation results in an inconsistent power flow pattern when the dispatch references are changed, which can be detrimental to the operation and seamless integration of HVDC grids. In this paper, a novel droop setting design method is proposed to address this problem and achieve more precise power dispatch. The effects of voltage deviations on power sharing accuracy and transmission loss are analysed. This paper shows that the droop setting design involves a trade-off between minimizing the voltage deviation, ensuring proper power delivery and reducing the total transmission loss. The efficacy of the proposed method is confirmed by simulation studies.
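To see where the deviation comes from, consider the idealised droop characteristic P_i = P_ref,i + (V_ref - V_i)/k_i. If every terminal saw the same voltage deviation dV (i.e., ignoring the line resistances and network topology that make the deviation non-uniform in practice, which is precisely what the paper's design method accounts for), the deviation and power shares would follow directly from the droop gains, as in this sketch with made-up values:

```python
# Minimal sketch of idealised DC droop sharing: sum_i dV/k_i = dP.
k = [0.05, 0.05, 0.10]   # hypothetical droop gains (V per MW)
dP = 60.0                # power imbalance to be absorbed (MW)

dV = dP / sum(1.0 / ki for ki in k)   # common voltage deviation
shares = [dV / ki for ki in k]        # each converter's contribution
print(f"voltage deviation = {dV:.3f} V, power shares (MW) = {shares}")
```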
Abstract:
This paper describes a stressed-skin diaphragm approach to the optimal design of the internal frame of a cold-formed steel portal framing system, in conjunction with the effect of semi-rigid joints. Both ultimate and serviceability limit states are considered. Wind load combinations are included. The designs are optimized using a real-coded niching genetic algorithm, in which both discrete and continuous decision variables are processed. For a building with two internal frames, it is shown that the material cost of the internal frame can be reduced by as much as 53%, compared with a design that ignores stressed-skin action.
Abstract:
Models of complex systems with n components typically have order n² parameters because each component can potentially interact with every other. When it is impractical to measure these parameters, one may choose random parameter values and study the emergent statistical properties at the system level. Many influential results in theoretical ecology have been derived from two key assumptions: that species interact with random partners at random intensities and that intraspecific competition is comparable between species. Under these assumptions, community dynamics can be described by a community matrix that is often amenable to mathematical analysis. We combine empirical data with mathematical theory to show that both of these assumptions lead to results that must be interpreted with caution. We examine 21 empirically derived community matrices constructed using three established, independent methods. The empirically derived systems are more stable by orders of magnitude than results from random matrices. This consistent disparity is not explained by existing results on predator-prey interactions. We investigate the key properties of empirical community matrices that distinguish them from random matrices. We show that network topology is less important than the relationship between a species’ trophic position within the food web and its interaction strengths. We identify key features of empirical networks that must be preserved if random matrix models are to capture the features of real ecosystems.
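The random-matrix baseline the authors compare against can be reproduced in a few lines: draw a community matrix with random interaction partners and intensities and a uniform intraspecific (diagonal) term, then test local stability via the leading eigenvalue's real part. The parameters below are illustrative, not taken from the 21 empirical matrices.

```python
# Minimal sketch of a random community-matrix stability test.
import numpy as np

rng = np.random.default_rng(2)
S, C, sigma = 50, 0.2, 0.5   # species, connectance, interaction std. dev.

A = rng.normal(0.0, sigma, (S, S)) * (rng.random((S, S)) < C)
np.fill_diagonal(A, -1.0)    # comparable intraspecific competition

lead = max(np.linalg.eigvals(A).real)
print("stable" if lead < 0 else "unstable", "| leading Re(lambda) =", lead)
```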
Abstract:
The design optimization of a cold-formed steel portal frame building is considered in this paper. The proposed genetic algorithm (GA) optimizer considers both topology (i.e., frame spacing and pitch) and cross-sectional sizes of the main structural members as the decision variables. Previous GAs in the literature were characterized by poor convergence, including slow progress, which usually resulted in excessive computation times and/or frequent failure to achieve an optimal or near-optimal solution. This is the main issue addressed in this paper. In an effort to improve the performance of the conventional GA, a niching strategy is presented that is shown to be an effective means of enhancing the dissimilarity of the solutions in each generation of the GA. Thus, population diversity is maintained and premature convergence is reduced significantly. Through benchmark examples, it is shown that the proposed efficient GA generates optimal solutions more consistently. A parametric study was carried out, and the results are included. They show significant variation in the optimal topology in terms of pitch and frame spacing for a range of typical column heights. They also show that the optimized designs achieved large savings based on the cost of the main structural elements; the inclusion of knee braces at the eaves yields further significant savings in cost.
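One common way to realise such a niching strategy is Goldberg-style fitness sharing, which penalises individuals crowded into the same region of the decision space so that dissimilar solutions survive selection. Whether the authors use sharing or another niching mechanism, the sketch below (with a toy objective and an assumed niche radius) shows the idea.

```python
# Minimal sketch of fitness sharing for maintaining GA population diversity.
import numpy as np

def shared_fitness(pop, raw_fitness, sigma_share=0.5):
    """Divide each raw fitness by its niche count (Goldberg-style sharing)."""
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=2)
    sh = np.where(d < sigma_share, 1.0 - d / sigma_share, 0.0)
    return raw_fitness / sh.sum(axis=1)   # niche count >= 1 (self-distance 0)

pop = np.random.default_rng(3).uniform(-1, 1, (20, 2))  # 20 candidate designs
raw = np.exp(-np.sum(pop**2, axis=1))                   # toy objective (maximise)
print(shared_fitness(pop, raw))
```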
Abstract:
The increasing complexity and scale of cloud computing environments, due to widespread data centre heterogeneity, make measurement-based evaluations highly difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy are typically voluminous, very difficult to collect without some form of automation, often not available in a suitable format, and time-consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection and model creation activities is presented, within the context of a suite of tools that have been developed and integrated to support these activities.
Abstract:
Loss of species will directly change the structure and potentially the dynamics of ecological communities, which in turn may lead to additional species loss (secondary extinctions) due to direct and/or indirect effects (e.g. loss of resources or altered population dynamics). Furthermore, the vulnerability of food webs to repeated species loss is expected to be affected by food web topology, species interactions, and the order in which species go extinct. Species traits such as body size, abundance and connectivity might determine a species' vulnerability to extinction and, thus, the order in which species go primarily extinct. Yet, the sequence of primary extinctions, and their effects on the vulnerability of food webs to secondary extinctions when species abundances are allowed to respond dynamically, has only recently become the focus of attention. Here, we analyse and compare the topological and dynamical robustness to secondary extinctions of model food webs, in the face of 34 extinction sequences based on species traits. Although secondary extinctions are frequent in the dynamical approach and rare in the topological approach, topological and dynamical robustness tends to be correlated for many bottom-up directed, but not for top-down directed, deletion sequences. Furthermore, removing species based on traits that are strongly positively correlated to the trophic position of species (such as large body size, low abundance, high net effect) is, under the dynamical approach, found to be as destructive as removing primary producers. Such top-down-oriented removals of species are often considered to correspond to realistic extinction scenarios, but earlier studies, based on topological approaches, have found such extinction sequences to have only moderate effects on the remaining community. Thus, our results suggest that the structure of ecological communities, and therefore the integrity of important ecosystem processes, could be more vulnerable to realistic extinction sequences than previously believed.
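The topological approach can be sketched directly: delete species from a food web in a given order and count the secondary extinctions, where a consumer goes extinct once all of its prey are gone. The tiny web below is a hypothetical example, not one of the model webs analysed in the paper; note how the top-down order causes no topological secondary extinctions, which is exactly the effect the dynamical analysis calls into question.

```python
# Minimal sketch of a topological secondary-extinction cascade.
import networkx as nx

# Edge u -> v means "v eats u"; species 0 and 1 are basal producers.
web = nx.DiGraph([(0, 2), (1, 2), (0, 3), (2, 4), (3, 4)])
basal = {0, 1}

def secondary_extinctions(web, order):
    g, lost = web.copy(), 0
    for sp in order:
        if sp in g:
            g.remove_node(sp)
        # Cascade: non-basal species with no prey left also go extinct.
        while True:
            doomed = [v for v in g if v not in basal and g.in_degree(v) == 0]
            if not doomed:
                break
            g.remove_nodes_from(doomed)
            lost += len(doomed)
    return lost

print("bottom-up order:", secondary_extinctions(web, [0, 1]))   # cascades
print("top-down order: ", secondary_extinctions(web, [4, 3]))   # no cascade
```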
Abstract:
This paper employs a unique extension-decomposition-aggregation (EDA) scheme to solve the formation flight control problem for multiple unmanned aerial vehicles (UAVs). The corresponding decentralised longitudinal and lateral formation autopilots are newly designed to maintain overall formation stability when encountering changes in formation error and topology. The concept of propagation layer number (PLN) is also proposed to provide an intuitive criterion for judging which type of formation topology is more suitable for minimising formation error propagation (FEP). The criterion states that the smaller the PLN of the formation, the quicker the response to the formation error. A smaller PLN also means that the resulting topology better suppresses FEP. Simulation studies of formation flight of multiple Aerosonde UAVs demonstrate that the designed formation controller based on the EDA strategy performs satisfactorily in keeping the overall formation stable, and the bidirectional partial-mesh topology is found to provide the best overall response to formation error propagation under the PLN criterion.
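Under one plausible reading of the PLN criterion (our assumption, since the abstract does not define it formally), the propagation layer number is the depth to which formation errors must propagate along information links from the leader, i.e. the BFS depth of the topology graph. The toy topologies below illustrate why a partial mesh gives a smaller PLN than a string.

```python
# Minimal sketch: PLN taken as the BFS depth of the formation topology.
import networkx as nx

def pln(edges, leader=0):
    g = nx.DiGraph(edges)
    return max(nx.single_source_shortest_path_length(g, leader).values())

chain = [(0, 1), (1, 2), (2, 3)]                  # string topology: PLN 3
mesh  = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]  # partial mesh: PLN 1
print("chain PLN:", pln(chain), "| partial-mesh PLN:", pln(mesh))
```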
Abstract:
Motivated by the need for designing efficient and robust fully-distributed computation in highly dynamic networks such as peer-to-peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of which nodes join and leave and at what time, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees, with high probability, the maintenance of a constant-degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to O(n/polylog(n)) per round (where n is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only O(polylog(n)) overhead for topology maintenance: only polylogarithmic (in n) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. The given protocol is a fundamental ingredient needed for the design of efficient fully-distributed algorithms for solving fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
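The property such a protocol maintains can be checked numerically: a bounded-degree graph is a good expander precisely when its spectral gap is bounded away from zero. The sketch below (not the paper's protocol) measures the gap of a random 4-regular graph of the kind the protocol aims to sustain under churn.

```python
# Minimal sketch: check expansion via the spectral gap d - lambda_2.
import networkx as nx

g = nx.random_regular_graph(d=4, n=1000, seed=5)
eig = sorted(nx.adjacency_spectrum(g).real, reverse=True)
print("spectral gap (d - lambda_2):", 4 - eig[1])   # bounded away from 0
```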
Abstract:
We study the fundamental Byzantine leader election problem in dynamic networks where the topology can change from round to round and nodes can also experience heavy churn (i.e., nodes can join and leave the network continuously over time). We assume the full information model, where the Byzantine nodes have complete knowledge about the entire state of the network at every round (including the random choices made by all the nodes), have unbounded computational power, and can deviate arbitrarily from the protocol. The churn is controlled by an adversary that has complete knowledge and control over which nodes join and leave and at what times, may also rewire the topology in every round, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is an O(log^3 n) round algorithm that achieves Byzantine leader election under the presence of up to O(n^(1/2 - ε)) Byzantine nodes (for a small constant ε > 0) and a churn of up to O(√n/polylog(n)) nodes per round (where n is the stable network size). The algorithm elects a leader with probability at least 1 - n^(-Ω(1)) and guarantees that it is an honest node with probability at least 1 - n^(-Ω(1)); assuming the algorithm succeeds, the leader's identity will be known to a 1 - o(1) fraction of the honest nodes. Our algorithm is fully-distributed, lightweight, and simple to implement. It is also scalable, as it runs in polylogarithmic (in n) time and requires nodes to send and receive messages of only polylogarithmic size per round. To the best of our knowledge, our algorithm is the first scalable solution for Byzantine leader election in a dynamic network with a high rate of churn; our protocol can also be used to solve Byzantine agreement in a straightforward way. We also show how to implement an (almost-everywhere) public coin with constant bias in a dynamic network with Byzantine nodes, and we provide a mechanism for enabling honest nodes to store information reliably in the network, which might be of independent interest.
Abstract:
We present a fully-distributed self-healing algorithm, DEX, that maintains a constant-degree expander network in a dynamic setting. To the best of our knowledge, our algorithm provides the first efficient distributed construction of expanders whose expansion properties hold deterministically, and it works even under an all-powerful adaptive adversary that controls the dynamic changes to the network (the adversary has unlimited computational power and knowledge of the entire network state, can decide which nodes join and leave and at what time, and knows the past random choices made by the algorithm). Previous distributed expander constructions typically provide only probabilistic guarantees on the network expansion, which rapidly degrade in a dynamic setting; in particular, the expansion properties can degrade even more rapidly under adversarial insertions and deletions. Our algorithm provides efficient maintenance and incurs a low overhead per insertion/deletion by an adaptive adversary: only O(log n) rounds and O(log n) messages are needed with high probability (n is the number of nodes currently in the network). The algorithm requires only a constant number of topology changes. Moreover, our algorithm allows for an efficient implementation and maintenance of a distributed hash table on top of DEX with only a constant additional overhead. Our results are a step towards implementing efficient self-healing networks that have guaranteed properties (constant bounded degree and expansion) despite dynamic changes.
Gopal Pandurangan has been supported in part by Nanyang Technological University Grant M58110000, Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 2 Grant MOE2010-T2-2-082, MOE AcRF Tier 1 Grant MOE2012-T1-001-094, and the United States-Israel Binational Science Foundation (BSF) Grant 2008348. Peter Robinson has been supported by Grant MOE2011-T2-2-042 “Fault-tolerant Communication Complexity in Wireless Networks” from the Singapore MoE AcRF-2. Work done in part while the author was at the Nanyang Technological University and at the National University of Singapore. Amitabh Trehan has been supported by the Israeli Centers of Research Excellence (I-CORE) program (Center No. 4/11). Work done in part while the author was at Hebrew University of Jerusalem and at the Technion and supported by a Technion fellowship.