994 results for Optimum-Path Forest classifier


Relevance:

20.00%

Publisher:

Abstract:

An inventory of isolated tree stands surrounded by desert pastures in Southern Tibet (A.R. Xizang, China) revealed more than 50 sites with vigorous trees of Juniperus convallium Rehder & E.H. Wilson and Juniperus tibetica Kom., together with more than 10 additional records of sites where juniper trees had been destroyed between 1959 and 1976. The tree stands are not restricted to any specific habitat and occur within an area stretching 650 km westwards from the current forest border of Southern Tibet. The trees are religious landmarks of the Tibetan Buddhists. The highest trees were found at an elevation of 4,860 m. Vegetation records, rainfall correlations, temperature data collected by local climate stations, and successful reforestation trials since 1999 indicate that forest relicts fragmented through human interference could regenerate if current cattle grazing and deforestation practices were halted. The drought line of Juniperus forests in Southern Tibet lies at approximately 200-250 mm/a. A first pollen diagram from Lhasa shows forest decline associated with the presence of humans since at least 4,600 yr BP. The currently degraded commons developed over the last 600 yr. To date, no remains of ancient forests have been reported from the Central Tibetan Highlands of the Changtang.

Relevance:

20.00%

Publisher:

Abstract:

This paper investigates a new method for a mobile robot to avoid moving obstacles with unknown trajectories in an uncertain environment. The position and trajectory of each obstacle are predicted in real time with a minimum mean-square-error estimation algorithm, and a modified form of the minimum mean-square-error classifier from pattern recognition is used to compute the robot's local obstacle-avoidance path. The manoeuvring-board technique used in ship navigation is then applied to determine the mobile robot's velocity in each navigation cycle. Simulation results demonstrate the feasibility of the method.
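The abstract above describes predicting each obstacle's position and trajectory with a real-time minimum mean-square-error estimator before planning the avoidance path. As a rough illustration only (not the authors' algorithm, whose motion model and estimator details are not given here), the sketch below fits a constant-velocity model to noisy position samples by least squares and extrapolates it; the sampling times and the `predict_obstacle` helper are assumptions.

```python
import numpy as np

def predict_obstacle(times, xs, ys, t_ahead):
    """Least-squares fit of a constant-velocity model to noisy (x, y)
    observations, then extrapolation t_ahead seconds into the future.

    Illustration of MMSE-style trajectory prediction only; the paper's
    exact estimator and motion model are not reproduced here.
    """
    A = np.column_stack([np.ones_like(times), times])   # [1, t] design matrix
    (x0, vx), _, _, _ = np.linalg.lstsq(A, xs, rcond=None)
    (y0, vy), _, _, _ = np.linalg.lstsq(A, ys, rcond=None)
    t = times[-1] + t_ahead
    return x0 + vx * t, y0 + vy * t                      # predicted position

# Example: an obstacle drifting roughly along x with noisy measurements.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
x = np.array([0.0, 0.52, 0.98, 1.51, 2.05])
y = np.array([1.0, 1.01, 0.99, 1.02, 1.00])
print(predict_obstacle(t, x, y, 1.0))   # roughly (3.0, 1.0) expected
```

In the method described above, such a prediction would feed the classifier that selects the local avoidance path, with the manoeuvring-board step fixing the robot's speed in each navigation cycle.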

Relevance:

20.00%

Publisher:

Abstract:

Based on an analysis of the principles and implementation of geo-steering drilling systems, this paper systematically studies their key technologies and methods. To recognize lithology, distinguish strata and track reservoirs, measurement-while-drilling (MWD) techniques and the processing of natural gamma, resistivity, induction, density and porosity data are investigated. Methods for pre-processing and standardizing MWD data and for converting geological data in directional and horizontal drilling are discussed; from these, methods for converting between measured depth (MD) and true vertical depth (TVD), and for formation description and adjacent-well correlation, are proposed. Research on sub-layer identification yields techniques for single-well interpretation, multi-well evaluation and reservoir description. Extremum and variance clustering analysis enables logging-facies analysis and stratum subdivision and interpretation, providing a theoretical method and technical basis for tracking reservoirs and achieving geo-steering drilling. A technique for probing the reservoir top with a hold-up section provides a well-path control planning method for tracking oil and gas reservoirs dynamically, which solves the problem of controlling the well trajectory when the layer's TVD is uncertain. Well-path control schemes and planning methods that meet the demands of target hitting, soft landing and continuous steering provide the technological guarantee of safe landing and successful drilling for horizontal, extended-reach and multi-target wells. Integrated design and control technologies spanning geology, reservoir and drilling are studied, with the reservoir exposure ratio as a primary index, together with methods for planning and controlling the optimum well path under multi-target constraints, so that the target well path lies in the most favourable position within the reservoir during geo-steering drilling. The mechanical model of the bottomhole assembly (BHA) is discussed using the finite element method, and BHA design methods are given on the basis of mechanical analyses according to the shape of the well trajectory and the structural and deformation characteristics of the BHA. Methods for predicting the deflection rate of bent-housing motors and designing their assemblies are proposed based on the principle of minimum potential energy; these clearly show the relation between the BHA's structural parameters and the deflection rate, and in particular the effect of the key factors on it. Moreover, the interaction model between bit and formation is discussed through the equivalent-formation and equivalent-bit approach, considering formation anisotropy and bit anisotropy, on the basis of an analysis of the factors influencing the well trajectory. Accordingly, the inherent relationship among well trajectory, formation, bit and drilling direction is revealed, which lays the theoretical basis and provides techniques for predicting and controlling the well trajectory.
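One of the building blocks mentioned above is converting survey data between measured depth (MD) and true vertical depth (TVD). The paper does not state which survey-calculation method it uses; purely as an illustration, the sketch below applies the industry-standard minimum-curvature formulas to a single survey interval, with station values chosen arbitrarily.

```python
import math

def min_curvature_step(md1, inc1, azi1, md2, inc2, azi2):
    """One survey interval of the minimum-curvature method.

    Returns (dTVD, dNorth, dEast) for the interval between two survey
    stations (measured depth in m, inclination and azimuth in degrees).
    Shown only as a common industry choice, not the paper's method.
    """
    i1, a1 = math.radians(inc1), math.radians(azi1)
    i2, a2 = math.radians(inc2), math.radians(azi2)
    dmd = md2 - md1
    # Dogleg angle between the two station direction vectors.
    cos_dl = math.cos(i2 - i1) - math.sin(i1) * math.sin(i2) * (1 - math.cos(a2 - a1))
    dl = math.acos(max(-1.0, min(1.0, cos_dl)))
    rf = 1.0 if dl < 1e-9 else (2.0 / dl) * math.tan(dl / 2.0)  # ratio factor
    dtvd = 0.5 * dmd * (math.cos(i1) + math.cos(i2)) * rf
    dnorth = 0.5 * dmd * (math.sin(i1) * math.cos(a1) + math.sin(i2) * math.cos(a2)) * rf
    deast = 0.5 * dmd * (math.sin(i1) * math.sin(a1) + math.sin(i2) * math.sin(a2)) * rf
    return dtvd, dnorth, deast

# Example: building inclination from 10 to 12 degrees over 30 m of MD.
print(min_curvature_step(1000.0, 10.0, 45.0, 1030.0, 12.0, 47.0))
```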

Relevance:

20.00%

Publisher:

Abstract:

Pd supported on WO3-ZrO2 (W/Zr atomic ratio = 0.2) calcined at 1073 K was found to be highly active and selective for the gas-phase oxidation of ethylene to acetic acid in the presence of water at 423 K and 0.6 MPa. The contact-time dependence demonstrated that acetic acid is formed via acetaldehyde produced by a Wacker-type reaction, not via ethanol formed by hydration of ethylene.

Relevance:

20.00%

Publisher:

Abstract:

How rainfall infiltration rate and soil hydrological characteristics develop over time under forests of different ages in temperate regions is poorly understood. In this study, infiltration rate and soil hydrological characteristics were investigated under forests of different ages and under grassland. Soil hydraulic characteristics were measured at different scales under a 250-year-old grazed grassland (GL), six-year-old (6 yr) and 48-year-old (48 yr) Scots pine (Pinus sylvestris) plantations, remnant 300-year-old individual Scots pines (OT) and a 4000-year-old Caledonian Forest (AF). In-situ field-saturated hydraulic conductivity (Kfs) was measured, and visible root:soil area was estimated from soil pits. Macroporosity, pore structure and macropore connectivity were estimated from X-ray tomography of soil cores and from water-release characteristics. At all scales the median values for Kfs, root fraction, macroporosity and connectivity tended to follow AF > OT > 48 yr > GL > 6 yr, indicating that infiltration rates and water storage increased with forest age. The remnant Caledonian Forest had a very wide range of Kfs (12 to > 4922 mm h-1), with maximum Kfs values 7 to 15 times larger than those of the 48-year-old Scots pine plantation, suggesting that undisturbed old forests, with high rainfall and minimal evapotranspiration in winter, may act as important areas for water storage and as sinks that allow storm rainfall to infiltrate and move to deeper soil layers via preferential flow. The importance of the development of soil hydrological characteristics under forests of different ages is discussed.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: In the current climate of high-throughput computational biology, inferring a protein's function from related measurements, such as protein-protein interaction relations, has become a canonical task. Most existing technologies pursue this task as a classification problem, on a term-by-term basis, for each term in a database such as the Gene Ontology (GO) database, a popular rigorous vocabulary for biological functions. However, ontology structures are essentially hierarchies, with certain top-to-bottom annotation rules that protein function predictions should in principle follow. Currently, the most common approach to imposing these hierarchical constraints on network-based classifiers is to apply transitive closure to the predictions.

RESULTS: We propose a probabilistic framework that integrates information in relational data, in the form of a protein-protein interaction network, and a hierarchically structured database of terms, in the form of the GO database, for the purpose of protein function prediction. At the heart of our framework is a factorization of local neighborhood information in the protein-protein interaction network across successive ancestral terms in the GO hierarchy. We introduce a classifier within this framework, with a computationally efficient implementation, that produces GO-term predictions that naturally obey a hierarchical 'true-path' consistency from root to leaves, without the need for further post-processing.

CONCLUSION: A cross-validation study, using data from the yeast Saccharomyces cerevisiae, shows that our method offers substantial improvements over both standard 'guilt-by-association' (i.e., Nearest-Neighbor) and more refined Markov random field methods, whether in their original form or when post-processed to artificially impose 'true-path' consistency. Further analysis of the results indicates that these improvements are associated with increased predictive capability (i.e., increased positive predictive value), and that this increase is consistent across GO-term depths. Additional in silico validation on a collection of new annotations recently added to GO confirms the advantages suggested by the cross-validation study. Taken as a whole, our results show that a hierarchical approach to network-based protein function prediction that exploits the ontological structure of protein annotation databases in a principled manner can offer substantial advantages over the successive application of 'flat' network-based methods.
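The 'true-path' consistency mentioned above requires that a protein predicted to have a GO term is also predicted to have all of that term's ancestors, so a child's score should never exceed a parent's. The paper's classifier builds this into the model itself; as a simple contrast, the sketch below shows the post-processing view of the constraint, clipping raw per-term scores in topological order (the term names and scores are invented).

```python
from graphlib import TopologicalSorter

def enforce_true_path(scores, parents):
    """Clip raw per-term scores so predictions obey the GO 'true-path'
    rule: a child term's score never exceeds any of its parents'.

    A simple post-processing illustration, not the paper's classifier.
    `parents` maps each GO term to the set of its parent terms.
    """
    consistent = {}
    # Parents are visited before children in a topological order of the DAG.
    for term in TopologicalSorter(parents).static_order():
        bound = min((consistent[p] for p in parents.get(term, ())), default=1.0)
        consistent[term] = min(scores.get(term, 0.0), bound)
    return consistent

# Toy hierarchy: root -> a -> b, with an inconsistent raw score for b.
parents = {"root": set(), "a": {"root"}, "b": {"a"}}
raw = {"root": 0.9, "a": 0.6, "b": 0.8}
print(enforce_true_path(raw, parents))   # b is clipped to 0.6
```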

Relevance:

20.00%

Publisher:

Abstract:

(This Technical Report revises TR-BUCS-2003-011.) The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. In this paper, we investigate a Bayesian approach to inferring at the source host the cause of a packet loss: congestion or wireless transmission error. Our approach is "mostly" end-to-end, since it requires only one long-term average quantity (namely, the long-term average packet loss probability over the wireless segment) that may be best obtained with help from the network (e.g. a wireless access agent). Specifically, we use Maximum Likelihood Ratio tests to evaluate TCP as a classifier of the type of packet loss. We study the effectiveness of short-term classification of packet errors (congestion vs. wireless), given stationary prior error probabilities and distributions of packet delays conditioned on the type of packet loss (measured over a larger time scale). Using our Bayesian approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient online error classifier can be built. We introduce a simple queueing model to explain the conditional delay distributions arising from different kinds of packet losses over a heterogeneous wired/wireless path. We show how Hidden Markov Models (HMMs) can be used by a TCP connection to infer conditional delay distributions efficiently. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by penalties on incorrect classification.
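The core of the classification step is a prior-weighted likelihood comparison: given the delay observed around a loss, pick the loss type whose conditional delay distribution, weighted by its prior, explains the observation better. The sketch below illustrates that test with Gaussian conditional densities and made-up parameters; the paper instead estimates the conditional delay distributions online (e.g. via HMMs) rather than assuming a parametric form.

```python
import math

def classify_loss(delay, p_wireless, cong_mu, cong_sigma, wifi_mu, wifi_sigma):
    """Classify a packet loss as 'congestion' or 'wireless' from the delay
    observed around the loss, using a prior-weighted likelihood-ratio test.

    The Gaussian conditional densities and parameter names are illustrative
    assumptions, not the paper's estimated distributions.
    """
    def gauss(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    # Compare prior-weighted likelihoods of the observed delay.
    score_cong = (1.0 - p_wireless) * gauss(delay, cong_mu, cong_sigma)
    score_wifi = p_wireless * gauss(delay, wifi_mu, wifi_sigma)
    return "congestion" if score_cong >= score_wifi else "wireless"

# Congestion losses tend to coincide with inflated queueing delay.
print(classify_loss(delay=180.0, p_wireless=0.3,
                    cong_mu=200.0, cong_sigma=40.0,
                    wifi_mu=80.0, wifi_sigma=25.0))   # -> 'congestion'
```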

Relevance:

20.00%

Publisher:

Abstract:

Accurate measurement of network bandwidth is crucial for flexible Internet applications and protocols which actively manage and dynamically adapt to changing utilization of network resources. These applications must do so to perform tasks such as distributing and delivering high-bandwidth media, scheduling service requests and performing admission control. Extensive work has focused on two approaches to measuring bandwidth: measuring it hop-by-hop, and measuring it end-to-end along a path. Unfortunately, best-practice techniques for the former are inefficient and techniques for the latter are only able to observe bottlenecks visible at end-to-end scope. In this paper, we develop and simulate end-to-end probing methods which can measure bottleneck bandwidth along arbitrary, targeted subpaths of a path in the network, including subpaths shared by a set of flows. As another important contribution, we describe a number of practical applications which we foresee as standing to benefit from solutions to this problem, especially in emerging, flexible network architectures such as overlay networks, ad-hoc networks, peer-to-peer architectures and massively accessed content servers.
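For background on the end-to-end measurements discussed above, the classic packet-pair idea estimates bottleneck capacity from the dispersion two back-to-back probes acquire at the narrow link. The sketch below shows that baseline estimator only; the paper's contribution, probing arbitrary targeted subpaths shared by sets of flows, is not reproduced here.

```python
def packet_pair_estimate(packet_size_bytes, dispersions_s):
    """Estimate bottleneck bandwidth (bits/s) from packet-pair dispersion.

    Two back-to-back packets leave the bottleneck link spaced by roughly
    size / capacity, so capacity ~= size / dispersion. Taking the median
    over many probes damps cross-traffic noise. This is the classic
    end-to-end baseline, not the subpath probing developed in the paper.
    """
    samples = sorted(packet_size_bytes * 8.0 / d for d in dispersions_s if d > 0)
    return samples[len(samples) // 2]          # median estimate

# 1500-byte probes observed with ~1 ms dispersion => ~12 Mbit/s bottleneck.
print(packet_pair_estimate(1500, [0.00101, 0.00099, 0.00150, 0.00100]))
```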

Relevance:

20.00%

Publisher:

Abstract:

We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well studied problem, but exact algorithms do not scale to huge graphs encountered on the web, social networks, and other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature which considers selecting landmarks at random. Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely, social search and community detection.
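The estimation step described above admits a very small implementation: run one offline BFS per landmark, then bound the query distance with the triangle inequality. The sketch below illustrates this on a toy unweighted graph; the landmark choice shown is arbitrary, whereas the paper's results hinge on selecting central, well-separated landmarks.

```python
from collections import deque

def bfs_distances(graph, source):
    """Unweighted shortest-path distances from `source` (offline step)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def estimate_distance(landmark_dists, u, v):
    """Upper-bound estimate: d(u, v) <= min over landmarks L of d(u, L) + d(L, v).

    Combining precomputed landmark distances via the triangle inequality is
    one standard estimator; the paper's focus is on how to pick the landmarks.
    """
    return min(d[u] + d[v] for d in landmark_dists if u in d and v in d)

# Toy path graph; nodes 'a' and 'e' serve as (arbitrarily chosen) landmarks.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
landmark_dists = [bfs_distances(graph, "a"), bfs_distances(graph, "e")]
print(estimate_distance(landmark_dists, "b", "d"))   # prints 4, an upper bound on the true distance of 2
```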

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we introduce the Generalized Equality Classifier (GEC) for use as an unsupervised clustering algorithm for categorizing analog data. GEC is based on a formal definition of inexact equality originally developed for voting in fault-tolerant software applications. GEC is defined using a metric-space framework. The only parameter in GEC is a scalar threshold which defines the approximate equality of two patterns. Here, we compare the characteristics of GEC to the ART2-A algorithm (Carpenter, Grossberg, and Rosen, 1991). In particular, we show that GEC with the Hamming distance performs the same optimization as ART2. Moreover, GEC has lower computational requirements than ART2 on serial machines.
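Read as described, GEC clusters analog patterns by 'inexact equality': a pattern belongs with an existing cluster if it lies within the scalar threshold of that cluster in the chosen metric, and otherwise starts a new cluster. The sketch below is one such threshold-based grouping with a Euclidean metric and first-match assignment; the exact GEC definition (metric choice, prototype handling) follows the paper and is not reproduced here.

```python
import math

def threshold_cluster(patterns, eps):
    """Group analog patterns by approximate equality: a pattern joins the
    first cluster whose stored prototype lies within `eps`, otherwise it
    founds a new cluster.

    Illustration of threshold-based 'inexact equality' clustering only;
    the actual GEC definition is given in the paper.
    """
    prototypes, labels = [], []
    for p in patterns:
        for k, proto in enumerate(prototypes):
            if math.dist(p, proto) <= eps:      # approximately equal
                labels.append(k)
                break
        else:                                   # no match: new cluster
            prototypes.append(p)
            labels.append(len(prototypes) - 1)
    return labels, prototypes

data = [(0.0, 0.1), (0.05, 0.12), (1.0, 1.0), (0.98, 1.1)]
print(threshold_cluster(data, eps=0.3))   # -> ([0, 0, 1, 1], ...)
```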

Relevance:

20.00%

Publisher:

Abstract:

Political drivers such as the Kyoto Protocol, the EU Energy Performance of Buildings Directive and the Energy End Use and Services Directive have been implemented in response to an identified need for a reduction in human-related CO2 emissions. Buildings account for a significant portion of global CO2 emissions, approximately 25-30%, and it is widely acknowledged by industry and research organisations that they operate inefficiently. In parallel, unsatisfactory indoor environmental conditions have been shown to negatively impact occupant productivity. Legislative drivers and client education are seen as the key motivating factors for improving the holistic environmental and energy performance of buildings.

A symbiotic relationship exists between building indoor environmental conditions and building energy consumption, yet traditional Building Management Systems and Energy Management Systems treat these separately. Conventional performance analysis compares building energy consumption with a previously recorded value or with the consumption of a similar building and does not recognise the fact that all buildings are unique. What is required, therefore, is a new framework which incorporates performance comparison against a theoretical, building-specific ideal benchmark. Traditionally, Energy Managers, who work at the operational level of organisations with respect to building performance, do not have access to ideal performance benchmark information and as a result cannot operate buildings optimally.

This thesis systematically defines Holistic Environmental and Energy Management and specifies the Scenario Modelling Technique, which in turn uses an ideal performance benchmark. The holistic technique uses quantified expressions of building performance and by doing so enables the profiled Energy Manager to visualise his actions and their downstream consequences in the context of overall building operation. The Ideal Building Framework facilitates the use of this technique by acting as a Building Life Cycle (BLC) data repository through which ideal building performance benchmarks are systematically structured and stored in parallel with actual performance data. The Ideal Building Framework utilises transformed data in the form of the Ideal Set of Performance Objectives and Metrics, which are capable of defining the performance of any building at any stage of the BLC. It is proposed that the union of Scenario Models for an individual building would result in a building-specific Combination of Performance Metrics, which would in turn be stored in the BLC data repository. The Ideal Data Set underpins the Ideal Set of Performance Objectives and Metrics and is the set of measurements required to monitor the performance of the Ideal Building.

A Model View describes the unique building-specific data relevant to a particular project stakeholder. The energy management data and information exchange requirements that underlie a Model View implementation are detailed and incorporate both traditional and proposed energy management. This thesis also specifies the Model View Methodology, which complements the Ideal Building Framework. The developed Model View and Rule Set methodology uses stakeholder-specific rule sets to define the environmental and energy performance data pertinent to each stakeholder. This generic process further enables each stakeholder to define the desired data resolution, for example basic, intermediate or detailed.

The Model View methodology is applicable to all project stakeholders, each requiring its own customised rule set. Two rule sets are defined in detail: the Energy Manager rule set and the LEED Accreditor rule set. This measurement-generation process, accompanied by the defined Model View, filters and expedites data access for all stakeholders involved in building performance. Information presentation is critical for effective use of the data provided by the Ideal Building Framework and the Energy Management View definition. The specifications for a customised Information Delivery Tool account for the established profile of Energy Managers and best-practice user interface design. Components of the developed tool could also be used by Facility Managers working at the tactical and strategic levels of organisations. Informed decision making is made possible through specified decision-assistance processes which incorporate the Scenario Modelling and Benchmarking techniques, the Ideal Building Framework, the Energy Manager Model View, the Information Delivery Tool and the established profile of Energy Managers. The Model View and Rule Set Methodology is effectively demonstrated on an appropriate mixed-use existing 'green' building, the Environmental Research Institute at University College Cork, using the Energy Management and LEED rule sets. Informed decision making is also demonstrated using a prototype scenario for the demonstration building.
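To make the Model View and Rule Set idea concrete, the sketch below shows a hypothetical rule-set filter: a stakeholder-specific rule set selects which performance metrics from the BLC data repository are exposed, at a chosen resolution. All metric names, rule sets and resolutions here are invented for illustration and do not come from the thesis.

```python
# Hypothetical rule-set filter: a stakeholder rule set selects which
# performance metrics are exposed and at which resolution. Metric names,
# rule sets and resolutions are illustrative placeholders only.

RULE_SETS = {
    "energy_manager": {
        "basic": ["electricity_kwh", "gas_kwh"],
        "detailed": ["electricity_kwh", "gas_kwh", "chiller_cop", "ahu_fan_kwh"],
    },
    "leed_accreditor": {
        "basic": ["energy_use_intensity", "water_use"],
    },
}

def model_view(blc_data: dict, stakeholder: str, resolution: str = "basic") -> dict:
    """Return only the repository metrics this stakeholder's rule set requests."""
    wanted = RULE_SETS[stakeholder].get(resolution, [])
    return {name: blc_data[name] for name in wanted if name in blc_data}

repository = {"electricity_kwh": 1.2e5, "gas_kwh": 4.0e4,
              "chiller_cop": 4.1, "energy_use_intensity": 145.0}
print(model_view(repository, "energy_manager", "detailed"))
```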

Relevance:

20.00%

Publisher:

Abstract:

Ireland experienced two critical junctures when its economic survival was threatened: 1958/9 and 1986/7. Common to both crises was the supplanting of long-established practices, which had become an integral part of the political culture of the state, by new ideas that ensured eventual economic recovery. In their adoption and implementation these ideas also fundamentally changed the institutions of state – how politics was done, how it was organised and regulated. The end result was the transformation of the Irish state. The main hypothesis of this thesis is that at those critical junctures the political and administrative elites who enabled economic recovery were not just making pragmatic decisions; their actions were influenced by ideas. Systematic content analysis of the published works of the main ideational actors, together with primary interviews with those actors still alive, reveals how their ideas were formed, what influenced them, and how they set about implementing them. As the hypothesis assumes institutional change over time, historical institutionalism serves as the theoretical framework. Central to this theory is the idea that choices made when a policy is being initiated or an institution formed will have a continuing influence long into the future. Institutions of state become 'path dependent' and impervious to change – the forces of inertia take over. That path dependency is broken at critical junctures. At those moments ideas play a major role, as they offer a set of ready-made solutions. Historical institutionalism serves as a robust framework for proving that, in the transformation of Ireland, the role of ideas in punctuating institutional path dependency at critical junctures was central.

Relevance:

20.00%

Publisher:

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often referred to as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics which are characteristic of a specific design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation.

The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into this flow. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits.

The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation.

Finally, the third question which this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power optimisation and a power-driven delay optimisation are proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay.

The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under a zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically between moving from one optimised solution to another, such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a tree-search algorithm used for traversing or searching a weighted tree, tree structure, or graph. We have used the Uniform Cost Search algorithm to search within the AIG network for a specific AIG node order for applying the reordering rules. After the reordering rules are applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. Reductions of 23% in power and 15% in delay with minimal overhead are achieved, compared to the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
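The Simulated Annealing step described above can be pictured as follows: propose a reordering move, always accept it if the cost does not worsen, and otherwise accept it with probability exp(-delta/T) under a cooling temperature T. The sketch below shows that acceptance rule with a penalty-based power/delay cost; the neighbour callback stands in for an AIG reordering rule, and the cost weighting is an illustrative choice rather than the thesis's exact formulation.

```python
import math
import random

def anneal(initial, neighbour, power, delay, delay_limit,
           t0=1.0, cooling=0.95, steps=2000):
    """Simulated annealing over circuit configurations: minimise switching
    power subject to a longest-path delay constraint.

    `neighbour` would apply one reordering rule in the thesis flow; here it
    is an abstract callback, and the penalty weight is an assumption.
    """
    def cost(s):
        # Penalise delay only when it exceeds the given constraint.
        return power(s) + 1000.0 * max(0.0, delay(s) - delay_limit)

    current, best = initial, initial
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Accept improvements always; accept worsenings with prob. exp(-delta/t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling                       # geometric cooling schedule
    return best

# Toy usage: states are numbers; power falls as s shrinks, but the delay
# constraint is what keeps the search from running away.
best = anneal(initial=5.0,
              neighbour=lambda s: s + random.uniform(-0.5, 0.5),
              power=lambda s: s * s,
              delay=lambda s: 10.0 - s,
              delay_limit=8.0)
print(best)   # settles near s = 2, the smallest s meeting the delay limit
```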

Relevance:

20.00%

Publisher:

Abstract:

This thesis is focused on the investigation of magnetic materials for high-power dc-dc converters in hybrid and fuel cell vehicles and on the development of an optimized high-power inductor for a multi-phase converter. The thesis introduces the power system architectures for hybrid and fuel cell vehicles. The requirements for power electronic converters are established and the dc-dc converter topologies of interest are introduced. A compact and efficient inductor is critical to reduce the overall cost, weight and volume of the dc-dc converter and to optimize vehicle driving range and traction power.

Firstly, materials suitable for a gapped CC-core inductor are analyzed and investigated. A novel inductor-design algorithm is developed and automated in order to compare and contrast the various magnetic materials over a range of frequencies and ripple ratios. The algorithm is developed for foil-wound inductors with gapped CC-cores in the low (10 kHz) to medium (30 kHz) frequency range and investigates the materials in a natural-convection-cooled environment.

The practical effects of frequency, ripple, air-gap fringing, and thermal configuration are investigated next for the iron-based amorphous metal and 6.5% silicon steel materials. A 2.5 kW converter is built to verify the optimum material selection and thermal configuration over the frequency range and ripple ratios of interest. Inductor size can increase in both of these laminated materials due to increased air-gap fringing losses. Distributing the air gap is demonstrated to reduce the inductor losses and size, but has practical limitations for iron-based amorphous metal cores. The effects of the manufacturing process are shown to degrade the core loss of multi-cut iron-based amorphous metal cores. The experimental results also suggest that gap loss is not a significant consideration in these experiments; the losses predicted by the equation developed by Reuben Lee and cited by Colonel McLyman are significantly higher than the experimental results suggest. Iron-based amorphous metal performs better than 6.5% silicon steel when a single-cut core and natural-convection cooling are used.

Conduction cooling, rather than natural convection, can result in the highest-power-density inductor. The cooling of these laminated materials is very dependent on the direction of the lamination and the component mounting. Experimental results are produced showing the effects of lamination direction on the cooling path. A significant temperature reduction is demonstrated for conduction cooling versus natural-convection cooling. Iron-based amorphous metal and 6.5% silicon steel are competitive materials when conduction cooled. A novel inductor design algorithm is developed for foil-wound inductors with gapped CC-cores with conduction cooling of the core and copper. Again, conduction cooling, rather than natural convection, is shown to reduce the size and weight of the inductor. The weight of the 6.5% silicon steel inductor is reduced by around a factor of ten compared to natural-convection cooling due to the high thermal conductivity of the material. The conduction-cooling algorithm is used to develop high-power custom inductors for use in a high-power multi-phase boost converter.

Finally, a high-power digitally-controlled multi-phase boost converter system is designed and constructed to test the high-power inductors. The performance of the inductors is compared to the predictions used in the design process and very good correlation is achieved.
The thesis results have been documented at IEEE APEC, PESC and IAS conferences in 2007 and at the IEEE EPE conference in 2008.
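As a flavour of the kind of calculation an inductor-design algorithm iterates over, the sketch below applies the ideal gapped-core relations (L = mu0*N^2*Ac/lg and B = mu0*N*I/lg, with the gap reluctance dominant and fringing neglected) to pick a gap length and turn count that keep the core below saturation. The numbers are arbitrary and this is only a textbook starting point; the thesis's algorithm additionally models fringing, winding and core losses, and the thermal configuration.

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space (H/m)

def gapped_core_design(L_target, i_peak, core_area_m2, b_sat):
    """Size the air gap and turns of a gapped-core inductor.

    Uses the ideal relations L = mu0 * N^2 * Ac / lg and
    B_peak = mu0 * N * I_peak / lg, neglecting fringing and core reluctance.
    A textbook starting point only, not the thesis's design algorithm.
    """
    # Choose the gap so the stored energy fits below saturation:
    # 0.5 * L * I^2 = B^2 / (2 * mu0) * (Ac * lg)  =>  lg from energy balance.
    lg = MU0 * L_target * i_peak**2 / (b_sat**2 * core_area_m2)
    n = b_sat * lg / (MU0 * i_peak)          # turns giving B_peak = b_sat
    return n, lg

# Example: 100 uH at 40 A peak on a 6 cm^2 core limited to 1.2 T.
turns, gap = gapped_core_design(100e-6, 40.0, 6e-4, 1.2)
print(f"{turns:.1f} turns, {gap*1e3:.2f} mm total gap")
```

In practice the turn count would be rounded up and the gap recomputed, which is exactly the sort of iteration an automated design algorithm performs across materials, frequencies and ripple ratios.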