143 results for Baire Topology


Relevance:

10.00%

Publisher:

Abstract:

WcaJ is an Escherichia coli membrane enzyme catalysing the biosynthesis of undecaprenyl-diphosphate-glucose, the first step in the assembly of colanic acid exopolysaccharide. WcaJ belongs to a large family of polyisoprenyl-phosphate hexose-1-phosphate transferases (PHPTs) sharing a similar predicted topology consisting of an N-terminal domain containing four transmembrane helices (TMHs), a large central periplasmic loop, and a C-terminal domain containing the fifth TMH (TMH-V) and a cytosolic tail. However, the topology of PHPTs has not been experimentally validated. Here, we investigated the topology of WcaJ using a combination of LacZ/PhoA reporter fusions and sulfhydryl
labelling by PEGylation of novel cysteine residues introduced into a cysteine-less WcaJ. The results showed that the large central loop and the C-terminal tail both reside in the cytoplasm and are separated by TMH-V, which does not fully span the membrane, likely forming a "hairpin" structure. Modelling of TMH-V revealed that a highly conserved proline might contribute to a helix-break-helix structure in all PHPT members. Bioinformatic analyses show that all of these features are conserved in PHPT homologues from
Gram-negative and Gram-positive bacteria. Our data demonstrate a novel topological configuration for PHPTs, which is proposed as a signature for all members of this enzyme family.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents new results for the (partial) maximum a posteriori (MAP) problem in Bayesian networks, which is the problem of querying the most probable state configuration of some of the network variables given evidence. It is demonstrated that the problem remains hard even in networks with very simple topology, such as binary polytrees and simple trees (including the Naive Bayes structure), which extends previous complexity results. Furthermore, a Fully Polynomial Time Approximation Scheme for MAP in networks with bounded treewidth and bounded number of states per variable is developed. Approximation schemes were thought to be impossible, but it is shown otherwise under the assumptions just mentioned, which hold in most applications.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents new results for the (partial) maximum a posteriori (MAP) problem in Bayesian networks, which is the problem of querying the most probable state configuration of some of the network variables given evidence. First, it is demonstrated that the problem remains hard even in networks with very simple topology, such as binary polytrees and simple trees (including the Naive Bayes structure). Such proofs extend previous complexity results for the problem. Inapproximability results are also derived in the case of trees if the number of states per variable is not bounded. Although the problem is shown to be hard and inapproximable even in very simple scenarios, a new exact algorithm is described that is empirically fast in networks of bounded treewidth and bounded number of states per variable. The same algorithm is used as the basis of a Fully Polynomial Time Approximation Scheme for MAP under such assumptions. Approximation schemes were generally thought to be impossible for this problem, but we show otherwise for classes of networks that are important in practice. The algorithms are extensively tested using some well-known networks as well as randomly generated cases to show their effectiveness.
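The partial MAP query discussed above can be conveyed by brute-force enumeration on a toy Naive Bayes model (all probabilities below are invented; this is not the paper's algorithm, which exploits bounded treewidth): states of the query variable are enumerated while unobserved variables are marginalized out, which is why the general problem is exponential.

```python
# A toy Naive Bayes network (invented probabilities): binary class C with two
# conditionally independent binary features F1, F2.
p_c  = {0: 0.6, 1: 0.4}                              # P(C)
p_f1 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}    # P(F1 | C)
p_f2 = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.2, 1: 0.8}}    # P(F2 | C)

def joint(c, f1, f2):
    """Joint probability P(C=c, F1=f1, F2=f2) of the Naive Bayes model."""
    return p_c[c] * p_f1[c][f1] * p_f2[c][f2]

def partial_map(evidence_f1):
    """Partial MAP: the most probable state of the query variable C given
    evidence on F1, marginalizing out the unobserved F2 by enumeration."""
    best_c, best_score = None, -1.0
    for c in (0, 1):                                  # enumerate query states
        score = sum(joint(c, evidence_f1, f2) for f2 in (0, 1))
        if score > best_score:
            best_c, best_score = c, score
    return best_c, best_score

print(partial_map(1))   # most probable class given F1 = 1
```

For larger networks the enumeration grows exponentially in the number of query and marginalized variables, which is exactly the cost the paper's exact algorithm and FPTAS avoid under bounded treewidth.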

Relevance:

10.00%

Publisher:

Abstract:

This paper strengthens the NP-hardness result for the (partial) maximum a posteriori (MAP) problem in Bayesian networks with tree topology (every variable has at most one parent) and variable cardinality at most three. MAP is the problem of querying the most probable state configuration of some (not necessarily all) of the network variables given evidence. It is demonstrated that the problem remains hard even in such simple networks.

Relevance:

10.00%

Publisher:

Abstract:

Brain tissue from so-called Alzheimer's disease (AD) mouse models has previously been examined using H-1 NMR-metabolomics, but comparable information concerning human AD is negligible. Since no animal model recapitulates all the features of human AD, we undertook the first H-1 NMR-metabolomics investigation of human AD brain tissue. Human post-mortem tissue from 15 AD subjects and 15 age-matched controls was prepared for analysis through a series of lyophilisation, milling, extraction and randomisation steps, and samples were analysed using H-1 NMR. Using partial least squares discriminant analysis, a model was built from the data obtained from brain extracts. Analysis of the brain extracts identified 24 metabolites. Significant elevations in brain alanine (15.4 %) and taurine (18.9 %) were observed in AD patients (p ≤ 0.05). Pathway topology analysis implicated dysregulation of either taurine and hypotaurine metabolism or alanine, aspartate and glutamate metabolism. Furthermore, screening of the metabolites as AD biomarkers demonstrated that individual metabolites only weakly discriminated AD cases [receiver operating characteristic (ROC) AUC < 0.67; p < 0.05]. However, paired metabolite ratios (e.g. alanine/carnitine) were more powerful discriminating tools (ROC AUC = 0.76; p < 0.01). This study further demonstrates the potential of metabolomics for elucidating the underlying biochemistry of AD and for helping to identify AD in patients attending the memory clinic.
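The biomarker screening step above rests on the ROC AUC, which for a single scalar score equals the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch with invented alanine/carnitine ratios (the study's patient data is not reproduced here):

```python
def roc_auc(case_scores, control_scores):
    """Empirical ROC AUC: the fraction of case-control pairs in which the
    case scores higher than the control (ties count one half)."""
    wins = sum((c > k) + 0.5 * (c == k)
               for c in case_scores for k in control_scores)
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical alanine/carnitine ratios (arbitrary units), for illustration only
ad_ratios      = [1.9, 2.1, 2.4, 1.7, 2.8]   # AD cases
control_ratios = [1.2, 1.6, 2.0, 1.4, 1.8]   # age-matched controls

print(roc_auc(ad_ratios, control_ratios))
```

An AUC of 0.5 means the score is no better than chance, while 1.0 means perfect separation, which is why the paired-ratio AUC of 0.76 reported above outperforms the individual metabolites (AUC < 0.67).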

Relevance:

10.00%

Publisher:

Abstract:

Inspired by the commercial application of the Exechon machine, this paper proposes a novel parallel kinematic machine (PKM) named the Exe-Variant. By exchanging the sequence of kinematic pairs in each limb of the Exechon machine, the Exe-Variant PKM adopts a 2UPR/1SPR topology, consisting of two identical UPR limbs and one SPR limb. The inverse kinematics of the 2UPR/1SPR parallel mechanism is first analyzed, based on which a conceptual design of the Exe-Variant is carried out. An algorithm for searching the reachable workspaces of the Exe-Variant and the Exechon is then proposed. Finally, the workspaces of two example systems of the Exechon and the Exe-Variant with similar dimensions are numerically simulated and compared. The comparison shows that the Exe-Variant possesses a workspace competitive with that of the Exechon machine, indicating that it can be used as a promising reconfigurable module in a hybrid 5-DOF machine tool system.
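The idea of a discretized workspace search can be conveyed by a simplified sketch: sample candidate platform positions on a grid and keep those whose limb lengths fall within the prismatic actuators' stroke limits. The real 2UPR/1SPR inverse kinematics (including platform orientation and joint limits) is replaced here by a purely illustrative point-mass feasibility check, and all dimensions are invented.

```python
import math

# Hypothetical base-joint positions (m) and prismatic stroke limits (m)
BASE_JOINTS = [(-0.5, 0.0, 0.0), (0.5, 0.0, 0.0), (0.0, 0.6, 0.0)]
L_MIN, L_MAX = 0.8, 1.5

def reachable(p):
    """A candidate point is reachable if every limb length lies within the
    actuator stroke. Stand-in for the full inverse-kinematics check."""
    return all(L_MIN <= math.dist(p, b) <= L_MAX for b in BASE_JOINTS)

def workspace_volume(n=20):
    """Approximate the reachable-workspace volume by counting grid cells in
    a search box x, y in [-1, 1], z in [0.5, 1.5]."""
    xy = [-1 + 2 * i / (n - 1) for i in range(n)]
    zs = [0.5 + 1.0 * i / (n - 1) for i in range(n)]
    cell = (2 / (n - 1)) ** 2 * (1.0 / (n - 1))   # volume of one grid cell
    count = sum(reachable((x, y, z)) for x in xy for y in xy for z in zs)
    return count * cell

print(workspace_volume())
```

Running the same search with the dimensions of two candidate machines is what allows the kind of workspace comparison reported in the abstract.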

Relevance:

10.00%

Publisher:

Abstract:

In order to carry out high-precision machining of large, thin-walled aerospace structural components with complex surfaces, this paper proposes a novel parallel kinematic machine (PKM) and formulates its semi-analytical theoretical stiffness model considering gravitational effects, which is verified by stiffness experiments. From the viewpoint of topology, the novel PKM consists of two substructures, a redundant and an overconstrained parallel mechanism, connected by two interlinked revolute joints. The theoretical stiffness model of the novel PKM is established upon the virtual work principle and the deformation superposition principle, after mapping the stiffness models of the substructures from joint space to operational space by Jacobian matrices and accounting for the deformation contributions of the interlinked revolute joints to the two substructures. Meanwhile, the component gravities are treated as external payloads acting on the end reference point of the novel PKM by the static equivalence principle. This approach is validated by comparing the theoretical stiffness values with experimental stiffness values in the same configurations, which also indicates that an equivalent gravity load can describe the actual distributed gravities with acceptable accuracy. Finally, on the basis of the verified theoretical stiffness model, the stiffness distributions of the novel PKM are illustrated and the contributions of component gravities to the stiffness of the novel PKM are discussed.
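The Jacobian-based stiffness mapping described above can be sketched with invented numbers. Under the common parallel-mechanism convention q̇ = J ẋ, joint-space stiffness maps to operational space as K_x = Jᵀ K_q J; an equivalent payload then yields the end-point deflection.

```python
import numpy as np

# Hypothetical 3-joint, 2-DOF Jacobian (convention: q_dot = J @ x_dot)
J = np.array([[1.0, 0.2],
              [0.0, 1.0],
              [0.5, 0.5]])
K_q = np.diag([2.0e6, 1.5e6, 1.0e6])   # joint stiffnesses (N/m), invented

K_x = J.T @ K_q @ J                    # operational-space stiffness matrix

# Deflection under an external payload, e.g. an equivalent gravity load
F = np.array([0.0, -500.0])            # N
dx = np.linalg.solve(K_x, F)           # end-reference-point deflection (m)
print(K_x)
print(dx)
```

The paper's semi-analytical model assembles such mapped substructure stiffnesses (plus the interlinked revolute joints) via superposition; this sketch only shows the single Jacobian mapping step.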

Relevance:

10.00%

Publisher:

Abstract:

The future European power system will have a hierarchical structure created by layers of system control, from a Supergrid via regional high-voltage transmission through to medium- and low-voltage distribution. Each level will have generation sources, such as large-scale offshore wind, wave, solar thermal and nuclear, directly connected to this Supergrid, and high levels of embedded generation connected to the medium-voltage distribution system. It is expected that the fuel portfolio will be dominated by offshore wind in Northern Europe and PV in Southern Europe. The strategies required to manage the coordination of supply-side variability with demand-side variability will include large-scale interconnection, demand side management, load aggregation and storage in the context of the Supergrid combined with the Smart Grid. The design challenge associated with this will include not only control topology, data acquisition, analysis and communications technologies, but also the selection of the fuel portfolio at a macro level. This paper quantifies the amount of demand side management, storage and so-called 'back-up generation' needed to support an 80% renewable energy portfolio in Europe by 2050. © 2013 IEEE.

Relevance:

10.00%

Publisher:

Abstract:

The power system of the future will have a hierarchical structure created by layers of system control, from a Supergrid via regional high-voltage transmission through to medium- and low-voltage distribution. Each level will have generation sources, such as large-scale offshore wind, wave, solar thermal and nuclear, directly connected to this Supergrid, and high levels of embedded generation connected to the medium-voltage distribution system. It is expected that the fuel portfolio will be dominated by offshore wind in Northern Europe and PV in Southern Europe. The strategies required to manage the coordination of supply-side variability with demand-side variability will include large-scale interconnection, demand side management, load aggregation and storage in the context of the Supergrid combined with the Smart Grid. The design challenge associated with this will include not only control topology, data acquisition, analysis and communications technologies, but also the selection of the fuel portfolio at a macro level. This paper quantifies the amount of demand side management, storage and so-called 'back-up generation' needed to support an 80% renewable energy portfolio in Europe by 2050.

Relevance:

10.00%

Publisher:

Abstract:

In multi-terminal high voltage direct current (HVDC) grids, the widely deployed droop control strategies cause a non-uniform voltage deviation across the grid, determined by the network topology and the droop settings, which affects the power flow. This voltage deviation results in an inconsistent power flow pattern when the dispatch references are changed, which can be detrimental to the operation and seamless integration of HVDC grids. In this paper, a novel droop setting design method is proposed to address this problem and achieve more precise power dispatch. The effects of the voltage deviations on power sharing accuracy and transmission loss are analysed. It is shown that the droop setting design involves a trade-off between minimizing the voltage deviation, ensuring proper power delivery and reducing the total transmission loss. The efficacy of the proposed method is confirmed by simulation studies.
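The non-uniform voltage deviation described above can be reproduced in a minimal steady-state model (all numbers invented; the paper's design method is not reproduced here): each droop-controlled converter injects current in proportion to its voltage deviation, and the network equations then fix how the deviations, and hence the power sharing, distribute across terminals.

```python
import numpy as np

# 3-node DC grid: nodes 1 and 2 are droop-controlled converters with
# injection I_i = (Vref - V_i) / Rd_i; node 3 is a constant-current load.
Vref = 1.0                          # per-unit reference voltage
Rd = np.array([0.05, 0.10])         # droop settings of converters 1 and 2
I_load = 2.0                        # current drawn at node 3 (p.u.)
g12, g13, g23 = 10.0, 8.0, 5.0      # line conductances (p.u.)
G = np.array([[ g12 + g13, -g12,       -g13      ],
              [-g12,        g12 + g23, -g23      ],
              [-g13,       -g23,        g13 + g23]])  # network Laplacian

D = np.diag([1 / Rd[0], 1 / Rd[1], 0.0])    # droop gains (0 at the load node)
rhs = D @ (Vref * np.ones(3)) - np.array([0.0, 0.0, I_load])
V = np.linalg.solve(G + D, rhs)             # steady-state nodal voltages
I_conv = (Vref - V[:2]) / Rd                # converter current injections

print(V)        # deviations Vref - V differ from node to node
print(I_conv)   # droop settings Rd decide how the load is shared
```

The stiffer converter (smaller Rd) picks up more of the load, and the nodal deviations are unequal because they depend on both the droop gains and the line conductances, which is exactly the coupling the proposed design method has to account for.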

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a stressed-skin diaphragm approach to the optimal design of the internal frame of a cold-formed steel portal framing system, in conjunction with the effect of semi-rigid joints. Both ultimate and serviceability limit states are considered. Wind load combinations are included. The designs are optimized using a real-coded niching genetic algorithm, in which both discrete and continuous decision variables are processed. For a building with two internal frames, it is shown that the material cost of the internal frame can be reduced by as much as 53%, compared with a design that ignores stressed-skin action.

Relevance:

10.00%

Publisher:

Abstract:

Models of complex systems with n components typically have order n² parameters because each component can potentially interact with every other. When it is impractical to measure these parameters, one may choose random parameter values and study the emergent statistical properties at the system level. Many influential results in theoretical ecology have been derived from two key assumptions: that species interact with random partners at random intensities and that intraspecific competition is comparable between species. Under these assumptions, community dynamics can be described by a community matrix that is often amenable to mathematical analysis. We combine empirical data with mathematical theory to show that both of these assumptions lead to results that must be interpreted with caution. We examine 21 empirically derived community matrices constructed using three established, independent methods. The empirically derived systems are more stable by orders of magnitude than results from random matrices. This consistent disparity is not explained by existing results on predator-prey interactions. We investigate the key properties of empirical community matrices that distinguish them from random matrices. We show that network topology is less important than the relationship between a species’ trophic position within the food web and its interaction strengths. We identify key features of empirical networks that must be preserved if random matrix models are to capture the features of real ecosystems.
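The random community-matrix construction the text refers to can be sketched as follows (parameter values are illustrative): off-diagonal interactions are sparse and random, the diagonal models uniform intraspecific competition, and local stability is read off the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_community_matrix(n=50, connectance=0.2, sigma=0.1, d=1.0):
    """Random community matrix: each off-diagonal entry is nonzero with
    probability `connectance` and drawn from N(0, sigma^2); the diagonal is
    fixed at -d (intraspecific competition comparable between species)."""
    A = rng.normal(0.0, sigma, (n, n)) * (rng.random((n, n)) < connectance)
    np.fill_diagonal(A, -d)
    return A

def is_stable(A):
    """The equilibrium is locally stable iff every eigenvalue of the
    community matrix has negative real part."""
    return np.max(np.linalg.eigvals(A).real) < 0

A = random_community_matrix()
print(is_stable(A))
# May's classic rule of thumb: stability is likely when sigma*sqrt(n*C) < d;
# here 0.1 * sqrt(50 * 0.2) ≈ 0.32 < 1.0, so this draw should be stable.
```

Running the same stability check on the 21 empirically derived matrices versus ensembles of such random draws is what exposes the orders-of-magnitude disparity the text reports.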

Relevance:

10.00%

Publisher:

Abstract:

The design optimization of a cold-formed steel portal frame building is considered in this paper. The proposed genetic algorithm (GA) optimizer considers both the topology (i.e., frame spacing and pitch) and the cross-sectional sizes of the main structural members as decision variables. Previous GAs in the literature were characterized by poor convergence, including slow progress, usually resulting in excessive computation times and/or frequent failure to achieve an optimal or near-optimal solution. This is the main issue addressed in this paper. In an effort to improve the performance of the conventional GA, a niching strategy is presented that is shown to be an effective means of enhancing the dissimilarity of the solutions in each generation of the GA. Thus, population diversity is maintained and premature convergence is reduced significantly. Through benchmark examples, it is shown that the proposed efficient GA generates optimal solutions more consistently. A parametric study was carried out, and the results are included. They show significant variation in the optimal topology, in terms of pitch and frame spacing, for a range of typical column heights. They also show that the optimized design achieves large savings based on the cost of the main structural elements; the inclusion of knee braces at the eaves yields further significant savings in cost.
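One standard niching mechanism, fitness sharing, conveys the idea behind such a strategy (this is a generic sketch, not the paper's specific real-coded niching GA): individuals in crowded regions of the decision space have their fitness discounted, which maintains population diversity and discourages premature convergence.

```python
def shared_fitness(population, raw_fitness, sigma_share=2.0, alpha=1.0):
    """Fitness sharing: divide each individual's raw fitness by its niche
    count, so solutions clustered within a radius `sigma_share` of each
    other are penalized relative to isolated ones."""
    shared = []
    for i, xi in enumerate(population):
        niche_count = 0.0
        for xj in population:
            d = abs(xi - xj)                       # distance in decision space
            if d < sigma_share:                    # within the niche radius
                niche_count += 1.0 - (d / sigma_share) ** alpha
        shared.append(raw_fitness[i] / niche_count)
    return shared

# Two individuals crowded near x = 0 must share their niche; the lone
# individual at x = 10 keeps its full raw fitness.
pop = [0.0, 0.1, 10.0]
fit = [1.0, 1.0, 1.0]
print(shared_fitness(pop, fit))
```

In a real-coded GA the distance would be measured over all decision variables (frame spacing, pitch, section sizes), but the diversity-preserving effect is the same.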

Relevance:

10.00%

Publisher:

Abstract:

The increasing complexity and scale of cloud computing environments, due to widespread data centre heterogeneity, makes measurement-based evaluations highly difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy is typically voluminous, very difficult to collect without some form of automation, often unavailable in a suitable format, and time-consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection and model creation is presented, within the context of a suite of tools that have been developed and integrated to support these activities.
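A hypothetical sketch of the kind of simulation-ready model an automated topology-definition step might build from collected inventory records (all record fields, class names and values below are invented, not the paper's tool suite):

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """A physical host as captured by automated data collection."""
    name: str
    cores: int
    ram_gb: int

@dataclass
class DataCentre:
    """A data centre grouping the hosts discovered within it."""
    name: str
    hosts: list = field(default_factory=list)

    def total_cores(self):
        return sum(h.cores for h in self.hosts)

def build_topology(inventory):
    """Turn collected inventory records (e.g. parsed from monitoring output)
    into a topology model ready to feed a simulator."""
    dcs = {}
    for rec in inventory:
        dc = dcs.setdefault(rec["dc"], DataCentre(rec["dc"]))
        dc.hosts.append(Host(rec["host"], rec["cores"], rec["ram_gb"]))
    return list(dcs.values())

inventory = [
    {"dc": "dc-east", "host": "h1", "cores": 32, "ram_gb": 128},
    {"dc": "dc-east", "host": "h2", "cores": 16, "ram_gb": 64},
    {"dc": "dc-west", "host": "h3", "cores": 64, "ram_gb": 256},
]
topology = build_topology(inventory)
print([(dc.name, dc.total_cores()) for dc in topology])
```

Automating this record-to-model step is what removes the manual, error-prone data preparation the abstract identifies as the bottleneck.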

Relevance:

10.00%

Publisher:

Abstract:

Loss of species will directly change the structure and potentially the dynamics of ecological communities, which in turn may lead to additional species loss (secondary extinctions) due to direct and/or indirect effects (e.g. loss of resources or altered population dynamics). Furthermore, the vulnerability of food webs to repeated species loss is expected to be affected by food web topology, species interactions, as well as the order in which species go extinct. Species traits such as body size, abundance and connectivity might determine a species' vulnerability to extinction and, thus, the order in which species go primarily extinct. Yet, the sequence of primary extinctions, and their effects on the vulnerability of food webs to secondary extinctions when species abundances are allowed to respond dynamically, has only recently become the focus of attention. Here, we analyse and compare the topological and dynamical robustness of model food webs to secondary extinctions, in the face of 34 extinction sequences based on species traits. Although secondary extinctions are frequent in the dynamical approach and rare in the topological approach, topological and dynamical robustness tend to be correlated for many bottom-up directed, but not for top-down directed, deletion sequences. Furthermore, removing species based on traits that are strongly positively correlated to the trophic position of species (such as large body size, low abundance, high net effect) is, under the dynamical approach, found to be as destructive as removing primary producers. Such top-down oriented removals of species are often considered to correspond to realistic extinction scenarios, but earlier studies, based on topological approaches, have found such extinction sequences to have only moderate effects on the remaining community. Thus, our results suggest that the structure of ecological communities, and therefore the integrity of important ecosystem processes, could be more vulnerable to realistic extinction sequences than previously believed.
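The topological approach contrasted above can be sketched as a simple cascade rule on a toy food web (the web below is invented): after a primary removal, any consumer left with none of its resources goes secondarily extinct, and the loss propagates; population dynamics are ignored entirely.

```python
def cascade(resources, removed):
    """Topological secondary-extinction cascade. `resources` maps each
    consumer to the set of species it feeds on; basal species (primary
    producers) do not appear as keys. Returns every species lost, both the
    primary removals and the secondary extinctions they trigger."""
    lost = set(removed)
    changed = True
    while changed:
        changed = False
        for consumer, prey in resources.items():
            # A consumer starves once all of its resources are gone.
            if consumer not in lost and prey and prey.issubset(lost):
                lost.add(consumer)
                changed = True
    return lost

# Toy web: two plants, two specialist herbivores, one generalist predator.
web = {"herb1": {"plant1"}, "herb2": {"plant2"}, "pred": {"herb1", "herb2"}}
print(cascade(web, {"plant1"}))             # herb1 starves; pred persists
print(cascade(web, {"plant1", "plant2"}))   # the whole web collapses
```

The dynamical approach replaces this all-or-nothing rule with population dynamics, which is why it records far more secondary extinctions for the same deletion sequences.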