119 results for Machines à vecteurs de support
Abstract:
In this paper, downscaling models are developed using a support vector machine (SVM) for obtaining projections of monthly mean maximum and minimum temperatures (T-max and T-min) to river-basin scale. The effectiveness of the model is demonstrated through application to downscale the predictands for the catchment of the Malaprabha reservoir in India, which is considered to be a climatically sensitive region. The probable predictor variables are extracted from (1) the National Centers for Environmental Prediction (NCEP) reanalysis dataset for the period 1978-2000, and (2) the simulations from the third-generation Canadian Coupled Global Climate Model (CGCM3) for emission scenarios A1B, A2, B1 and COMMIT for the period 1978-2100. The predictor variables are classified into three groups, namely A, B and C. Large-scale atmospheric variables such as air temperature and zonal and meridional wind velocities at 925 mb, which are often used for downscaling temperature, are considered as predictors in Group A. Surface flux variables such as latent heat (LH), sensible heat, shortwave radiation and longwave radiation fluxes, which control the temperature of the Earth's surface, are tried as plausible predictors in Group B. Group C comprises all the predictor variables in Groups A and B. Scatter plots and cross-correlations are used to verify the reliability of the simulation of the predictor variables by CGCM3 and to study the predictor-predictand relationships. The impact of trends in the predictor variables on the downscaled temperature was studied. The predictor air temperature at 925 mb showed an increasing trend, while the rest of the predictors showed no trend. The performance of the SVM models that were developed, one for each combination of predictor group, predictand, calibration period and location-based stratification (land, land and ocean) of climate variables, was evaluated. In general, the models which use predictor variables pertaining to the land surface improved the performance of the SVM models for downscaling T-max and T-min.
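A minimal sketch of the SVM downscaling step, assuming the monthly predictor and predictand series are available as NumPy arrays; the variable names and hyperparameters are hypothetical, not the authors' calibrated values:

```python
# Minimal sketch of SVM-based statistical downscaling (not the authors' exact setup).
# Assumes `predictors` (n_months x n_vars, e.g. NCEP variables at 925 mb) and
# `tmax` (n_months,) are NumPy arrays; all names and settings are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def downscale_tmax(predictors, tmax, n_calibration):
    """Calibrate an SVR on the first n_calibration months, validate on the rest."""
    X_cal, y_cal = predictors[:n_calibration], tmax[:n_calibration]
    X_val, y_val = predictors[n_calibration:], tmax[n_calibration:]

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X_cal, y_cal)

    rmse = np.sqrt(np.mean((model.predict(X_val) - y_val) ** 2))
    return model, rmse
```

The same fitted model would then be driven with the corresponding CGCM3 predictor series to obtain scenario projections.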
Abstract:
High-end network security applications demand high-speed operation and large rule-set support. Packet classification is the core functionality that demands high throughput in such applications. This paper proposes a packet classification architecture to meet such high throughput. We have implemented a Firewall with this architecture in reconfigurable hardware. We propose an extension to the Distributed Crossproducting of Field Labels (DCFL) technique to achieve a scalable and high-performance architecture. The implemented Firewall takes advantage of the inherent structure and redundancy of the rule set by using our DCFL Extended (DCFLE) algorithm. The use of the DCFLE algorithm results in both speed and area improvement when it is implemented in hardware. Although we restrict ourselves to standard 5-tuple matching, the architecture supports additional fields. High-throughput classification invariably uses Ternary Content Addressable Memory (TCAM) for prefix matching, though TCAM fares poorly in terms of area and power efficiency. Use of TCAM for port range matching is expensive, as the range-to-prefix conversion results in a large number of prefixes, leading to storage inefficiency. Extended TCAM (ETCAM) is fast and the most storage-efficient solution for range matching. We present for the first time a reconfigurable hardware implementation of ETCAM. We have implemented our Firewall as an embedded system on a Virtex-II Pro FPGA-based platform, running Linux with the packet classification in hardware. The Firewall was tested in real time with a 1 Gbps Ethernet link and 128 sample rules. The packet classification hardware uses a quarter of the logic resources and slightly over one third of the memory resources of the XC2VP30 FPGA. It achieves a maximum classification throughput of 50 million packets/s, corresponding to a 16 Gbps link rate for the worst-case packet size. A Firewall rule update involves only memory re-initialization in software, without any hardware change.
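The storage inefficiency that motivates ETCAM comes from the standard range-to-prefix expansion used for plain TCAM. The sketch below shows that expansion (it is not part of the DCFLE or ETCAM design itself); an arbitrary range on a W-bit port field can expand to as many as 2W−2 prefixes, i.e. up to 30 for 16-bit ports.

```python
def range_to_prefixes(lo, hi, width=16):
    """Expand the port range [lo, hi] into a minimal set of (value, prefix_length) pairs."""
    prefixes = []
    while lo <= hi:
        # Largest power-of-two block that is aligned at lo and does not overshoot hi.
        size = lo & -lo if lo > 0 else 1 << width
        while size > hi - lo + 1:
            size >>= 1
        wildcard_bits = size.bit_length() - 1
        prefixes.append((lo, width - wildcard_bits))
        lo += size
    return prefixes

# Example: the common "high ports" range needs 6 prefixes ...
print(range_to_prefixes(1024, 65535))
# ... while a range such as [1, 65534] hits the 30-prefix worst case.
print(len(range_to_prefixes(1, 65534)))
```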
Abstract:
This paper presents the programming of an FPGA (Field Programmable Gate Array) to emulate the dynamics of DC machines. The FPGA allows high-speed real-time simulation with high precision. The described design includes a block-diagram representation of the DC machine, which contains all the arithmetic and logical operations. The real-time simulation of the machine on the FPGA is controlled through user interfaces: a keypad, an on-line LCD display and a digital-to-analog converter. This approach provides emulation of the electrical machine by changing its parameters. A separately excited DC machine is implemented and experimental results are presented.
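A minimal software sketch of the kind of block-diagram arithmetic such an emulator maps onto FPGA logic: forward-Euler updates of the armature-current and speed equations of a separately excited DC machine. The model structure and all parameter values are illustrative assumptions, not taken from the paper (and on hardware the same update would typically use fixed-point arithmetic).

```python
def dc_machine_step(ia, w, va, tl, dt,
                    Ra=1.0, La=0.05, J=0.01, B=0.001, Ke=0.5, Kt=0.5):
    """One forward-Euler step of armature current ia [A] and speed w [rad/s]."""
    dia = (va - Ra * ia - Ke * w) / La   # armature circuit: va = Ra*ia + La*dia/dt + Ke*w
    dw = (Kt * ia - B * w - tl) / J      # mechanical: J*dw/dt = Kt*ia - B*w - TL
    return ia + dt * dia, w + dt * dw

# Example: start-up transient at 220 V with no load torque.
ia, w = 0.0, 0.0
for _ in range(20000):
    ia, w = dc_machine_step(ia, w, va=220.0, tl=0.0, dt=1e-4)
```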
Abstract:
The standard free energies of formation of CaO derived from a variety of high-temperature equilibrium measurements made by seven groups of experimentalists are significantly different from those given in the standard compilations of thermodynamic data. Indirect support for the validity of the compiled data comes from new solid-state electrochemical measurements using single-crystal CaF2 and SrF2 as electrolytes. The free energy changes for the following reactions are obtained: CaO + MgF2 → MgO + CaF2, ΔG° = −68,050 − 2.47 T (±100) J mol⁻¹; SrO + CaF2 → SrF2 + CaO, ΔG° = −35,010 + 6.39 T (±80) J mol⁻¹. The standard free energy changes associated with the cell reactions agree with data in the standard compilations to within ±4 kJ mol⁻¹. The results of this study do not support recent suggestions for a major revision of the thermodynamic data for CaO.
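As a quick check of scale, the linear ΔG°(T) expressions above can be evaluated directly; the temperature used below is an illustrative choice, not a value reported in the paper.

```python
# Evaluate the reported cell-reaction free energies at an example temperature.
T = 1300.0  # K, illustrative
dG1 = -68050 - 2.47 * T   # CaO + MgF2 -> MgO + CaF2, J/mol
dG2 = -35010 + 6.39 * T   # SrO + CaF2 -> SrF2 + CaO, J/mol
print(f"dG1 = {dG1 / 1000:.1f} kJ/mol, dG2 = {dG2 / 1000:.1f} kJ/mol")
# dG1 = -71.3 kJ/mol, dG2 = -26.7 kJ/mol
```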
Abstract:
This paper presents real-time simulation models of electrical machines on an FPGA platform. The implementation of real-time numerical integration methods with digital logic elements is discussed. Several numerical integration methods are presented. A real-time simulation of a DC machine is carried out on this FPGA platform and important transient results are presented. These results are compared with simulation results obtained from commercial off-line simulation software.
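To illustrate the kind of fixed-step integration rules that can be realized with digital logic, the sketch below compares a forward-Euler update with a trapezoidal update on a simple first-order lag. The test system, step size and parameters are illustrative assumptions, not the paper's machine models.

```python
# Compare two fixed-step integration rules on dx/dt = (u - x) / tau.
def simulate(rule, tau=0.05, u=1.0, dt=1e-3, steps=200):
    x = 0.0
    for _ in range(steps):
        if rule == "euler":          # explicit: x[k+1] = x[k] + dt*f(x[k])
            x = x + dt * (u - x) / tau
        elif rule == "trapezoidal":  # implicit rule, solved in closed form for this linear system
            x = (x * (1 - dt / (2 * tau)) + dt * u / tau) / (1 + dt / (2 * tau))
    return x

# Both approach the exact step response 1 - exp(-0.2/0.05) ~= 0.982.
print(simulate("euler"), simulate("trapezoidal"))
```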
Abstract:
An interaction analysis has been conducted to study the effects of a local loss of support beneath the beam footing of a two-bay plane frame. The results of the study indicate that the magnitude of the increase in the bending moment and axial force in the structure due to the presence of a void depends not only on the extent of support loss, but also on the relative stiffnesses between the foundation beam and the soil, and between the superstructure and the soil. The increase in bending moment, even for a void span of 1/12 of the foundation beam length, can become significant enough to exceed the safety provisions. The study shows that the effect of a void on the superstructure moments can be greatly minimized by a combination of a rigid foundation and a flexible superstructure.
Abstract:
A simple yet efficient method for the minimization of incompletely specified sequential machines (ISSMs) is proposed. Precise theorems are developed, as a consequence of which several compatibles can be deleted from consideration at the very first stage in the search for a minimal closed cover. Thus, the computational work is significantly reduced. The initial cardinality of the minimal closed cover is further reduced by considering the maximal compatibles (MCs) only; as a result, the method converges to the solution faster than existing procedures. The "rank" of a compatible is defined. It is shown that ordering the compatibles in accordance with their rank reduces the number of comparisons to be made in the search for exclusion of compatibles. The new method is simple, systematic, and programmable. It does not involve any heuristics or intuitive procedures. For small- and medium-sized machines, it can be used for hand computation as well. For one of the illustrative examples used in this paper, 30 out of 40 compatibles can be ignored in accordance with the proposed rules, and only the remaining 10 compatibles need be considered for obtaining a minimal solution.
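As an illustration of one building block such methods rely on, the sketch below enumerates the maximal compatibles of a machine as the maximal cliques of its compatibility graph (Bron-Kerbosch style). The four-state example is hypothetical and far smaller than the paper's 40-compatible machine.

```python
def maximal_compatibles(states, compatible):
    """Enumerate maximal cliques; `compatible` is a set of frozenset state pairs."""
    cliques = []

    def expand(r, p, x):
        if not p and not x:
            cliques.append(r)          # r cannot be extended: it is a maximal compatible
            return
        for v in list(p):
            nv = {u for u in states if u != v and frozenset({u, v}) in compatible}
            expand(r | {v}, p & nv, x & nv)
            p = p - {v}
            x = x | {v}

    expand(set(), set(states), set())
    return cliques

states = ["A", "B", "C", "D"]
compatible = {frozenset(p) for p in [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]}
print(maximal_compatibles(states, compatible))   # maximal compatibles: {A,B,C} and {C,D}
```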
Abstract:
The conclusion that the number of species co-existing within a biological community cannot exceed the number of limiting factors is not valid if we assume that (i) the relative efficiency of two competing species in utilizing a resource is not independent of the resource density, but one species may be more efficient at a lower density and less efficient at a higher density and (ii) there is a spatial or temporal heterogeneity in the density of the resource. This spatial or temporal heterogeneity does not have to be furnished by factors external to the biological community, but may be generated within the biological community itself as in the case of a vertical gradient of light in a plant community. This possibility of a stable co-existence of more than one species in a community limited by a single resource, even when the resource is being supplied uniformly in space and time, is formally demonstrated.
Abstract:
A simple procedure for the state minimization of an incompletely specified sequential machine whose number of internal states is not very large is presented. It introduces the concept of a compatibility graph from which the set of maximal compatibles of the machine can be very conveniently derived. Primary and secondary implication trees associated with each maximal compatible are then constructed. The minimal state machine covering the incompletely specified machine is then obtained from these implication trees.
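A minimal sketch of how the pairwise compatibility relation underlying such a compatibility graph can be computed: start from state pairs with no output conflict, then iteratively discard pairs whose implied next-state pairs are themselves incompatible. The small ISSM used here is hypothetical, with `spec[state][input] = (next_state, output)` and `None` for unspecified entries.

```python
from itertools import combinations

def compatible_pairs(spec, inputs):
    states = list(spec)
    # Seed with pairs whose specified outputs never conflict.
    pairs = set()
    for a, b in combinations(states, 2):
        if all(spec[a][i] is None or spec[b][i] is None or spec[a][i][1] == spec[b][i][1]
               for i in inputs):
            pairs.add(frozenset({a, b}))

    changed = True
    while changed:                       # propagate incompatibility through implied pairs
        changed = False
        for p in list(pairs):
            a, b = tuple(p)
            for i in inputs:
                if spec[a][i] and spec[b][i]:
                    na, nb = spec[a][i][0], spec[b][i][0]
                    if na != nb and frozenset({na, nb}) not in pairs:
                        pairs.discard(p)
                        changed = True
                        break
    return pairs

spec = {
    "A": {"0": ("B", "1"), "1": None},
    "B": {"0": ("A", "1"), "1": ("C", "0")},
    "C": {"0": None,       "1": ("C", "0")},
}
print(compatible_pairs(spec, inputs=["0", "1"]))   # all three pairs are compatible here
```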
Abstract:
Urban sprawl is the outgrowth along the periphery of cities and along highways. Although an accurate definition of urban sprawl may be debated, the consensus is that urban sprawl is characterized by an unplanned and uneven pattern of growth, driven by a multitude of processes and leading to inefficient resource utilization. Urbanization in India has never been as rapid as it is in recent times. As one of the fastest growing economies in the world, India faces stiff challenges in managing urban sprawl while ensuring the effective delivery of basic services in urban areas. Urban areas contribute significantly to the national economy (more than 50% of GDP), while facing critical challenges in accessing basic services and necessary infrastructure, both social and economic. The overall rise in the population of the urban poor and the increase in travel times due to congestion along road networks are indicators of how effectively planning and governance have assessed and catered for this demand. Agencies of governance at all levels (local bodies, state government and federal government) are facing the brunt of this rapid urban growth. It is imperative for planning and governance to facilitate, augment and service the requisite infrastructure systematically over time. Provision of infrastructure and assurance of the delivery of basic services cannot happen overnight, and hence planning has to facilitate forecasting and service provision with appropriate financial mechanisms.
Abstract:
Screening and early identification of primary immunodeficiency disease (PID) genes is a major challenge for physicians. Many resources have catalogued molecular alterations in known PID genes along with their associated clinical and immunological phenotypes. However, these resources do not assist in identifying candidate PID genes. We have recently developed a platform designated Resource of Asian PIDs, which hosts information pertaining to molecular alterations, protein-protein interaction networks, mouse studies and microarray gene expression profiling of all known PID genes. Using this resource as a discovery tool, we describe the development of an algorithm for the prediction of candidate PID genes. Using a support vector machine learning approach, we have predicted 1442 candidate PID genes using 69 binary features of 148 known PID genes and 3162 non-PID genes as a training data set. The power of this approach is illustrated by the fact that six of the predicted genes have recently been experimentally confirmed to be PID genes. The remaining genes in this predicted data set represent attractive candidates for testing in patients whose etiology cannot be ascribed to any of the known PID genes.
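A minimal sketch of the prediction step described above, assuming the binary feature matrices and gene names are available as arrays; all names are hypothetical and this is not the authors' implementation.

```python
# Train an SVM on binary features of known PID vs non-PID genes, then score the
# remaining (unlabelled) genes as candidate PID genes.
import numpy as np
from sklearn.svm import SVC

def rank_candidate_genes(X_train, y_train, X_unlabelled, gene_names):
    """X_* are {0,1} feature matrices (e.g. 69 binary features); y_train is 1 for known PID genes."""
    clf = SVC(kernel="rbf", class_weight="balanced", probability=True)  # 148 vs 3162 is imbalanced
    clf.fit(X_train, y_train)
    scores = clf.predict_proba(X_unlabelled)[:, 1]     # probability of being a PID gene
    order = np.argsort(scores)[::-1]                   # highest-scoring candidates first
    return [(gene_names[i], float(scores[i])) for i in order]
```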
Abstract:
With the increasing adoption of wireless technology, it is reasonable to expect an increase in the demand for supporting both real-time multimedia and high-rate reliable data services. Next-generation wireless systems employ an Orthogonal Frequency Division Multiplexing (OFDM) physical layer owing to the high data-rate transmissions that are possible without an increase in bandwidth. Towards improving the performance of these systems, we look at the design of resource allocation algorithms at the medium-access (MAC) layer and their impact on higher layers. While TCP-based elastic traffic needs reliable transport, UDP-based real-time applications have stringent delay and rate requirements. The MAC algorithms, while catering to the heterogeneous service needs of these higher layers, trade off between maximizing the system capacity and providing fairness among users. The novelty of this work is the proposal of various channel-aware resource allocation algorithms at the MAC layer, which can result in significant performance gains in an OFDM-based wireless system.
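One well-known example of a channel-aware MAC allocation policy that trades aggregate rate against fairness is proportional-fair subcarrier assignment, sketched below under illustrative assumptions (random per-slot rates, a simple per-user throughput average); it is not necessarily the algorithm proposed in the paper.

```python
import numpy as np

def proportional_fair_allocation(rates, avg_thru, eps=1e-9):
    """rates: (n_users, n_subcarriers) achievable rates this slot;
    avg_thru: (n_users,) smoothed past throughput. Returns the chosen user per subcarrier."""
    metric = rates / (avg_thru[:, None] + eps)   # favour users whose channel is good relative to their average
    return np.argmax(metric, axis=0)

# Example slot: 4 users, 8 subcarriers, random channel-dependent rates.
rng = np.random.default_rng(0)
rates = rng.uniform(0.1, 1.0, size=(4, 8))
avg_thru = np.array([0.8, 0.2, 0.5, 0.5])        # user 1 has been starved, so it tends to be favoured
assignment = proportional_fair_allocation(rates, avg_thru)
```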