Abstract:
The national science project HIRFL-CSR has recently been officially accepted. As a cyclotron and synchrotron complex, it places particularly high demands on the control system: hundreds of pieces of equipment need to be synchronized. An integrated timing control system has been built to meet these demands. The output rate and accuracy of the controller are 16 bit/μs, and the accuracy of the time delay reaches 40 ns. The timing control system is based on a typical event distribution system, which adopts a new event generation and distribution scheme. The scheme of the timing control system, its innovations, its architecture, and the implementation method are presented in this paper.
Abstract:
An advanced superconducting ECR ion source named SECRAL has been constructed at the Institute of Modern Physics, Chinese Academy of Sciences. Its superconducting magnet assembly consists of three axial solenoid coils and six sextupole coils with a cold-iron structure serving as field booster and clamp. In order to investigate the structure of the sextupole coils and to increase the structural reliability of the magnet system, global and local structural analyses have been performed for various operation scenarios. The winding-pack and support-structure design of the magnet system, together with the mechanical calculations and stress analysis, are given in this paper. The analysis results show that the magnet system is safe in the reference operation scenarios and that the configuration of the magnet complies with the design requirements of SECRAL.
Abstract:
A dynamic measurement system was developed by the Institute of Modern Physics (IMP) for the dipole prototype of the Rapid Cycling Synchrotron (RCS) of the China Spallation Neutron Source (CSNS). The repetition frequency of the RCS is 25 Hz. The probe is a moving-arc search coil, and the data acquisition system is based on the dynamic analysis module from National Instruments. To obtain the errors of the high-order field harmonics at the fundamental frequency, the hardware integrator is replaced by a high-speed ADC with a software filter and integrator. A series of field harmonic coefficients is used to express the variation of the dynamic field in space and time simultaneously. The measurement system has been tested at the Institute of High Energy Physics (IHEP), and the properties of the RCS dipole prototype have been measured. Some measurement results and the repeatability of the system are presented in this paper.
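As a rough illustration of the approach described above, in which a high-speed ADC with a software filter and integrator replaces the hardware integrator and the field is expressed through harmonic coefficients, the following sketch integrates a sampled search-coil voltage in software and extracts harmonic amplitudes over one fundamental period. The sampling rate, coil constant, and test signal are hypothetical placeholders, not values from the paper.

import numpy as np

# Hypothetical acquisition parameters (not from the paper).
fs = 100_000.0            # ADC sampling rate [Hz]
f0 = 25.0                 # fundamental (repetition) frequency [Hz]
coil_const = 0.05         # effective turns-area product N*A of the search coil [m^2]

t = np.arange(0, 1.0 / f0, 1.0 / fs)

# Stand-in for the digitized coil voltage: fundamental plus a small third harmonic.
v = np.cos(2 * np.pi * f0 * t) + 0.02 * np.cos(2 * np.pi * 3 * f0 * t)

# Software "filter + integrator": remove the DC offset, then integrate V dt to get flux.
v = v - v.mean()
flux = np.cumsum(v) / fs           # simple rectangle-rule integration
B = flux / coil_const              # average field linked by the coil [T]

# Harmonic amplitudes over one fundamental period via FFT.
spectrum = np.fft.rfft(B) / len(B)
amps = 2.0 * np.abs(spectrum[1:6])         # amplitudes of harmonics 1..5
print("harmonics relative to the fundamental:", amps / amps[0])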
Abstract:
Sustainable water use is seriously compromised in the North China Plain (NCP) due to the huge water requirements of agriculture, the largest user of water resources. An integrated approach combining an ecosystem model with emergy analysis is presented to determine the optimum quantity of irrigation for sustainable development in irrigated cropping systems. Because the traditional emergy method pays little attention to the dynamic interaction among components of the ecological system, and dynamic emergy accounting is in its infancy, it is hard to evaluate the cropping system in hypothetical situations or in response to specific changes. To solve this problem, an ecosystem model (the Vegetation Interface Processes (VIP) model) is introduced into the emergy analysis to describe the production processes. Some raw data that would be collected by survey or observation in conventional emergy analysis can instead be calculated by the VIP model in the new approach. To demonstrate the advantage of this new approach, we use it to assess the wheat-maize rotation cropping system at different irrigation levels and derive the optimum quantity of irrigation according to the index of ecosystem sustainable development in the NCP. The results show that the optimum quantity of irrigation in this region should be 240-330 mm per year in the wheat system and no irrigation in the maize system, because at this irrigation level the rotation cropping system shows the best efficiency in energy transformation (transformity = 6.05E+4 sej/J), the highest sustainability (renewability = 25%), the lowest environmental impact (environmental loading ratio = 3.5), and the greatest Emergy Sustainability Index (0.47), compared with the system at other irrigation amounts. This study demonstrates that the new approach has broader applicability than conventional emergy analysis and is helpful for optimizing resource allocation, saving resources, and maintaining agricultural sustainability.
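For readers unfamiliar with the indices quoted above, the sketch below computes the standard emergy ratios (transformity, renewability, emergy yield ratio, environmental loading ratio, and the Emergy Sustainability Index) from aggregate emergy flows, following their common definitions in the emergy-accounting literature. The flow values are hypothetical placeholders, not data from this study.

# Aggregate emergy flows in solar emjoules (sej); all numbers are hypothetical.
R = 3.0e15     # locally renewable inputs (e.g. rain)
N = 2.0e15     # locally nonrenewable inputs (e.g. soil loss, groundwater drawdown)
F = 5.0e15     # purchased inputs (e.g. irrigation pumping, fertilizer, fuel, labor)

Y = R + N + F                  # total emergy supporting the yield
E_out = 2.0e11                 # energy content of the harvested crop [J] (hypothetical)

transformity = Y / E_out       # sej per J of product
renewability = R / Y           # renewable fraction of the total support
EYR = Y / F                    # emergy yield ratio
ELR = (N + F) / R              # environmental loading ratio
ESI = EYR / ELR                # Emergy Sustainability Index

print(f"transformity = {transformity:.2e} sej/J")
print(f"renewability = {renewability:.0%}, ELR = {ELR:.2f}, ESI = {ESI:.2f}")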
Abstract:
We developed a direct partitioning method to construct a seamless discrete global grid system (DGGS) with any resolution based on a two-dimensional projected plane and the Earth ellipsoid. This DGGS is composed of congruent square grids on the projected plane and irregular ellipsoidal quadrilaterals on the ellipsoidal surface. A new equal-area projection, named the parallels plane (PP) projection and derived from the expansion of the central meridian and the parallels, is employed to perform the transformation between the planar squares and the corresponding ellipsoidal grids. The horizontal sides of the grids are parts of the parallel circles, and the vertical sides are complex ellipsoidal curves that can be obtained from the inverse expression of the PP projection. The partition strategies, transformation equations, geometric characteristics, and distortions of this DGGS are discussed. Our analysis proves that the DGGS is area-preserving, while length distortions occur only on the vertical sides off the central meridian. Angular and length distortions increase with latitude and with the longitudinal span away from the chosen central meridian. This direct partition generates only a small number of broken grids, which can be treated individually.
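The idea of expanding the central meridian and the parallels can be illustrated with a simplified spherical analogue, x = R*cos(lat)*(lon - lon0), y = R*lat; the paper's PP projection is the ellipsoidal counterpart and is not reproduced here. The sketch below only checks numerically that the Jacobian of this spherical analogue equals the spherical area element R^2*cos(lat), i.e. that such a mapping is area-preserving.

import numpy as np

R = 6371000.0        # spherical Earth radius [m]; the paper works on the ellipsoid
lon0 = 0.0           # chosen central meridian [rad]

def forward(lat, lon):
    """Spherical analogue of a parallels-plane-style projection (equal-area)."""
    x = R * np.cos(lat) * (lon - lon0)   # arc length along the parallel
    y = R * lat                          # arc length along the central meridian
    return x, y

# Numerical Jacobian at a sample point, compared with the area element R^2 * cos(lat).
lat, lon, h = np.radians(40.0), np.radians(15.0), 1e-6
dx_dlon = (forward(lat, lon + h)[0] - forward(lat, lon - h)[0]) / (2 * h)
dx_dlat = (forward(lat + h, lon)[0] - forward(lat - h, lon)[0]) / (2 * h)
dy_dlat = (forward(lat + h, lon)[1] - forward(lat - h, lon)[1]) / (2 * h)
dy_dlon = 0.0
jacobian = dx_dlon * dy_dlat - dx_dlat * dy_dlon
print(jacobian / (R**2 * np.cos(lat)))   # ~1.0, confirming area preservation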
Abstract:
Recent studies have noted that vertex degree in the autonomous system (AS) graph exhibits a highly variable distribution [15, 22]. The most prominent explanatory model for this phenomenon is the Barabási-Albert (B-A) model [5, 2]. A central feature of the B-A model is preferential connectivity—meaning that the likelihood a new node in a growing graph will connect to an existing node is proportional to the existing node’s degree. In this paper we ask whether a more general explanation than the B-A model, and absent the assumption of preferential connectivity, is consistent with empirical data. We are motivated by two observations: first, AS degree and AS size are highly correlated [11]; and second, highly variable AS size can arise simply through exponential growth. We construct a model incorporating exponential growth in the size of the Internet, and in the number of ASes. We then show via analysis that such a model yields a size distribution exhibiting a power-law tail. In such a model, if an AS’s link formation is roughly proportional to its size, then AS degree will also show high variability. We instantiate such a model with empirically derived estimates of growth rates and show that the resulting degree distribution is in good agreement with that of real AS graphs.
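The mechanism at the heart of this argument, that exponential growth alone yields a power-law size tail, can be reproduced with a short simulation. If the number of ASes grows at rate lambda, the ages of ASes at observation time are approximately exponentially distributed, and if each AS grows at rate mu from unit size, then P(size > s) is roughly s^(-lambda/mu). The rates below are arbitrary placeholders, not the paper's empirically derived estimates.

import numpy as np

rng = np.random.default_rng(0)
lam, mu = 0.5, 0.4         # hypothetical AS birth rate and per-AS growth rate [1/year]

# Ages of ASes at observation time: approximately Exponential(lam)
# when the AS population itself has been growing exponentially.
ages = rng.exponential(1.0 / lam, size=200_000)
sizes = np.exp(mu * ages)  # each AS grows exponentially from size 1

# Empirical tail exponent from the complementary CDF on a log-log scale.
s = np.sort(sizes)
ccdf = 1.0 - np.arange(1, len(s) + 1) / len(s)
mask = (s > 10) & (ccdf > 0)
slope = np.polyfit(np.log(s[mask]), np.log(ccdf[mask]), 1)[0]
print(f"fitted tail exponent ~ {slope:.2f} (theory: -lambda/mu = {-lam/mu:.2f})")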
Abstract:
This paper examines how and why web server performance changes as the workload at the server varies. We measure the performance of a PC acting as a standalone web server, running Apache on top of Linux. We use two important tools to understand what aspects of software architecture and implementation determine performance at the server. The first is a tool that we developed, called WebMonitor, which measures activity and resource consumption, both in the operating system and in the web server. The second is the kernel profiling facility distributed as part of Linux. We vary the workload at the server along two important dimensions: the number of clients concurrently accessing the server, and the size of the documents stored on the server. Our results quantify and show how more clients and larger files stress the web server and operating system in different and surprising ways. Our results also show the importance of fixed costs (i.e., opening and closing TCP connections, and updating the server log) in determining web server performance.
Abstract:
We examine the question of whether to employ the first-come-first-served (FCFS) discipline or the processor-sharing (PS) discipline at the hosts in a distributed server system. We are interested in the case in which service times are drawn from a heavy-tailed distribution, and so have very high variability. Traditional wisdom when task sizes are highly variable would prefer the PS discipline, because it allows small tasks to avoid being delayed behind large tasks in a queue. However, we show that system performance can actually be significantly better under FCFS queueing, if each task is assigned to a host based on the task's size. By task assignment, we mean an algorithm that inspects incoming tasks and assigns them to hosts for service. The particular task assignment policy we propose is called SITA-E: Size Interval Task Assignment with Equal Load. Surprisingly, under SITA-E, FCFS queueing typically outperforms the PS discipline by a factor of about two, as measured by mean waiting time and mean slowdown (waiting time of task divided by its service time). We compare the FCFS/SITA-E policy to the processor-sharing case analytically; in addition we compare it to a number of other policies in simulation. We show that the benefits of SITA-E are present even in small-scale distributed systems (four or more hosts). Furthermore, SITA-E is a static policy that does not incorporate feedback knowledge of the state of the hosts, which allows for a simple and scalable implementation.
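A minimal sketch of the SITA-E rule follows, under an assumed bounded-Pareto task-size distribution (the parameters are illustrative, not the paper's setup): size cutoffs are chosen so that each host's interval carries an equal share of the expected load, and an incoming task is then routed purely by its size; each host serves its own queue FCFS.

import numpy as np

# Hypothetical bounded-Pareto task-size distribution on [k, p] with shape alpha,
# density proportional to x**(-alpha - 1); these parameters are illustrative only.
alpha, k, p = 1.1, 512.0, 1e7
hosts = 4

def load_cutoffs(alpha, k, p, hosts):
    """Size cutoffs splitting the expected load (integral of x*f(x)) into equal parts."""
    a = 1.0 - alpha
    total = k**a - p**a                  # proportional to the total expected load
    return [(k**a - (i / hosts) * total) ** (1.0 / a) for i in range(1, hosts)]

cutoffs = load_cutoffs(alpha, k, p, hosts)

def assign(task_size):
    """SITA-E dispatch: route by size interval; each host then runs plain FCFS."""
    return int(np.searchsorted(cutoffs, task_size))

print("cutoffs:", [f"{c:.3g}" for c in cutoffs])
print("1 kB task -> host", assign(1024.0), "; 1 MB task -> host", assign(1e6))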
Abstract:
We consider the problem of task assignment in a distributed system (such as a distributed Web server) in which task sizes are drawn from a heavy-tailed distribution. Many task assignment algorithms are based on the heuristic that balancing the load at the server hosts will result in optimal performance. We show that this conventional wisdom is less true when the task size distribution is heavy-tailed (as is the case for Web file sizes). We introduce a new task assignment policy, called Size Interval Task Assignment with Variable Load (SITA-V). SITA-V purposely operates the server hosts at different loads, and directs smaller tasks to the lighter-loaded hosts. The result is that SITA-V provably decreases the mean task slowdown by significant factors (up to 1000 or more); the more heavy-tailed the workload, the greater the improvement factor. We evaluate the tradeoff between improvement in slowdown and increase in waiting time in a system using SITA-V, and show conditions under which SITA-V represents a particularly appealing policy. We conclude with a discussion of the use of SITA-V in a distributed Web server, and show that it is attractive because it has a simple implementation which requires no communication from the server hosts back to the task router.
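The SITA-V idea of choosing the size cutoff to minimize mean slowdown rather than to balance load can be explored with a small M/G/1 calculation, using the Pollaczek-Khinchine mean waiting time per host and mean slowdown = E[W]*E[1/S] + 1 under FCFS. The two-host setup, bounded-Pareto parameters, and offered load below are illustrative assumptions rather than the paper's configuration; with heavy-tailed sizes the slowdown-minimizing cutoff typically leaves the small-task host lighter-loaded, as the abstract describes.

import numpy as np

# Hypothetical two-host SITA system with bounded-Pareto task sizes on [k, p];
# the parameters and the total offered load are illustrative assumptions only.
alpha, k, p, total_work = 1.1, 512.0, 1e7, 1.2   # total_work = sum of per-host loads

def moment(r, lo, hi):
    """Integral of x**r times the unnormalized density x**(-alpha - 1) over [lo, hi]."""
    e = r - alpha
    return (hi**e - lo**e) / e

def loads_and_mean_slowdown(cutoff):
    """Per-host loads and overall mean slowdown for a given size cutoff (M/G/1 FCFS per host)."""
    lam_total = total_work / (moment(1, k, p) / moment(0, k, p))   # Poisson arrival rate
    loads, slowdown = [], 0.0
    for lo, hi in [(k, cutoff), (cutoff, p)]:
        q = moment(0, lo, hi) / moment(0, k, p)          # fraction of tasks in this interval
        ES = moment(1, lo, hi) / moment(0, lo, hi)
        ES2 = moment(2, lo, hi) / moment(0, lo, hi)
        EinvS = moment(-1, lo, hi) / moment(0, lo, hi)
        rho = lam_total * q * ES                         # this host's load
        if rho >= 1.0:
            return loads, float("inf")                   # unstable host: reject this cutoff
        EW = lam_total * q * ES2 / (2.0 * (1.0 - rho))   # Pollaczek-Khinchine mean wait
        slowdown += q * (EW * EinvS + 1.0)               # E[slowdown] = E[W] * E[1/S] + 1
        loads.append(rho)
    return loads, slowdown

a = 1.0 - alpha
balanced = (0.5 * (k**a + p**a)) ** (1.0 / a)            # cutoff that equalizes the load
candidates = np.logspace(np.log10(k) + 0.05, np.log10(p) - 0.05, 400)
best = min(candidates, key=lambda c: loads_and_mean_slowdown(c)[1])

for label, c in [("equal load", balanced), ("min slowdown", best)]:
    loads, sd = loads_and_mean_slowdown(c)
    print(f"{label:>12}: cutoff {c:,.0f}, loads {[round(r, 2) for r in loads]}, mean slowdown {sd:.2f}")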
Abstract:
Recent work has shown the prevalence of small-world phenomena [28] in many networks. Small-world graphs exhibit a high degree of clustering, yet have typically short path lengths between arbitrary vertices. Internet AS-level graphs have been shown to exhibit small-world behaviors [9]. In this paper, we show that both Internet AS-level and router-level graphs exhibit small-world behavior. We attribute such behavior to two possible causes–namely the high variability of vertex degree distributions (which were found to follow approximately a power law [15]) and the preference of vertices to have local connections. We show that both factors contribute with different relative degrees to the small-world behavior of AS-level and router-level topologies. Our findings underscore the inefficacy of the Barabasi-Albert model [6] in explaining the growth process of the Internet, and provide a basis for more promising approaches to the development of Internet topology generators. We present such a generator and show the resemblance of the synthetic graphs it generates to real Internet AS-level and router-level graphs. Using these graphs, we have examined how small-world behaviors affect the scalability of end-system multicast. Our findings indicate that lower variability of vertex degree and stronger preference for local connectivity in small-world graphs results in slower network neighborhood expansion, and in longer average path length between two arbitrary vertices, which in turn results in better scaling of end system multicast.
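The two properties that define small-world behavior here, high clustering together with short typical path lengths, can be measured for any topology snapshot with a few lines of code. The sketch below uses networkx and a synthetic graph (the Holme-Kim power-law-cluster generator, chosen only because it combines skewed degrees with a local triangle-formation step) as a stand-in for a measured AS-level or router-level graph, and compares its average clustering and average shortest-path length against a random graph with the same number of nodes and edges.

import networkx as nx

# Synthetic stand-in topology; in practice, load a measured AS- or router-level graph here.
G = nx.powerlaw_cluster_graph(2000, 2, 0.5, seed=1)

def small_world_stats(g):
    """Average clustering and average shortest path length of the largest connected component."""
    core = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_clustering(core), nx.average_shortest_path_length(core)

# Random baseline with the same number of nodes and edges (not degree-preserving).
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=1)

for name, g in [("topology", G), ("random  ", R)]:
    cc, pl = small_world_stats(g)
    print(f"{name}: avg clustering = {cc:.4f}, avg path length = {pl:.2f}")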
Abstract:
We present a type system that can effectively facilitate the use of types in capturing invariants in stateful programs that may involve (sophisticated) pointer manipulation. With its root in a recently developed framework Applied Type System (ATS), the type system imposes a level of abstraction on program states by introducing a novel notion of recursive stateful views and then relies on a form of linear logic to reason about such views. We consider the design and then the formalization of the type system to constitute the primary contribution of the paper. In addition, we mention a prototype implementation of the type system and then give a variety of examples that attests to the practicality of programming with recursive stateful views.
Abstract:
Many people suffer from conditions that lead to deterioration of motor control and make access to the computer using traditional input devices difficult. In particular, they may lose control of hand movement to the extent that the standard mouse cannot be used as a pointing device. Most current alternatives use markers or specialized hardware to track and translate a user's movement to pointer movement. These approaches, for example wearable devices, may be perceived as intrusive. Camera-based assistive systems that use visual tracking of features on the user's body often require cumbersome manual adjustment. This paper introduces an enhanced computer-vision-based strategy in which features, for example on a user's face, viewed through an inexpensive USB camera, are tracked and translated to pointer movement. The main contributions of this paper are (1) enhancing a video-based interface with a mechanism for mapping feature movement to pointer movement, which allows users to navigate to all areas of the screen even with very limited physical movement, and (2) providing a customizable, hierarchical navigation framework for human-computer interaction (HCI). This framework provides effective use of the vision-based interface system for accessing multiple applications in an autonomous setting. Experiments with several users show the effectiveness of the mapping strategy and its usage within the application framework as a practical tool for desktop users with disabilities.
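The first contribution, mapping small tracked-feature movements to pointer movements that can still reach the whole screen, can be sketched as a relative mapping with a dead zone and an adjustable gain. The function, gain, and thresholds below are a hypothetical illustration, not the paper's actual algorithm.

# Hypothetical relative mapping from tracked-feature displacement (pixels in the
# camera frame) to pointer displacement (pixels on screen). Not the paper's code.
SCREEN_W, SCREEN_H = 1920, 1080
GAIN = 8.0          # amplifies small physical movements so the whole screen is reachable
DEAD_ZONE = 2.0     # ignore displacements below this threshold (tracking jitter)

pointer = [SCREEN_W // 2, SCREEN_H // 2]   # start at the screen center

def update_pointer(dx, dy):
    """Move the pointer by an amplified feature displacement, clamped to the screen."""
    if (dx * dx + dy * dy) ** 0.5 < DEAD_ZONE:
        return tuple(pointer)              # treat tiny motion as tracking noise
    pointer[0] = min(max(pointer[0] + GAIN * dx, 0), SCREEN_W - 1)
    pointer[1] = min(max(pointer[1] + GAIN * dy, 0), SCREEN_H - 1)
    return tuple(pointer)

# Example: a small 5-pixel feature movement to the right moves the pointer 40 pixels.
print(update_pointer(5.0, 0.0))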
Abstract:
Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here consider a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among objects are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships.
Abstract:
Both animals and mobile robots, or animats, need adaptive control systems to guide their movements through a novel environment. Such control systems need reactive mechanisms for exploration, and learned plans to efficiently reach goal objects once the environment is familiar. How reactive and planned behaviors interact in real time, and are released at the appropriate times during autonomous navigation, remains a major unsolved problem. This work presents an end-to-end model to address this problem, named SOVEREIGN: A Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation system. The model comprises several interacting subsystems governed by systems of nonlinear differential equations. As the animat explores the environment, a vision module processes visual inputs using networks that are sensitive to visual form and motion. Targets processed within the visual form system are categorized by real-time incremental learning. Simultaneously, visual target position is computed with respect to the animat's body. Estimates of target position activate a motor system to initiate approach movements toward the target. Motion cues from animat locomotion can elicit orienting head or camera movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement, based on both visual and proprioceptive cues, are stored within a motor working memory. Sensory cues are stored in a parallel sensory working memory. These working memories trigger learning of sensory and motor sequence chunks, which together control planned movements. Effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. The planning chunks effect a gradual transition from reactive to planned behavior. The model can read out different motor sequences under different motivational states and learns more efficient paths to rewarded goals as exploration proceeds. Several volitional signals automatically gate the interactions between model subsystems at appropriate times. A 3-D visual simulation environment reproduces the animat's sensory experiences as it moves through a simplified spatial environment. The SOVEREIGN model exhibits robust goal-oriented learning of sequential motor behaviors. Its biomimetic structure explicates a number of brain processes which are involved in spatial navigation.