938 results for Requests
Abstract:
This paper introduces the optimized magnetic field design for a cold-yoke superconducting solenoid. Using several optimization methods together with OPERA, we optimize the main solenoid, the cold yoke and the compensating winding. Through this design, the requirements of the superconducting solenoid are met.
Abstract:
Research on robot grinding and polishing processes is built on a large number of robot grinding and polishing experiments. Taking organic glass (acrylic) as the machined material and under the premise of meeting workpiece quality requirements, this paper determines a reasonable sequence for using abrasive discs during robot grinding and polishing, plans the machining paths, and arranges orthogonal experiments to obtain the optimal combination of process parameters, and on this basis formulates a machining strategy for robot grinding and polishing. Finally, a robot grinding and polishing machining example further verifies that this process knowledge is reasonable.
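The orthogonal-experiment step described above lends itself to a small illustration. The sketch below lays out a Taguchi-style L9 array over three hypothetical grinding parameters (spindle speed, feed rate, contact force) and picks the combination with the lowest measured roughness; the factors, levels and the placeholder roughness function are assumptions for illustration, not values from the paper.

# Minimal sketch of an orthogonal (Taguchi L9) experiment for robot
# grinding/polishing parameters. Factors, levels and the quality metric
# are hypothetical placeholders, not values from the paper.

# Standard L9 orthogonal array for three 3-level factors (0-indexed levels).
L9 = [
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]

# Hypothetical factor levels.
speed_rpm = [3000, 4500, 6000]   # spindle speed
feed_mm_s = [2.0, 5.0, 8.0]      # feed rate along the path
force_N   = [5.0, 10.0, 15.0]    # normal contact force

def measured_roughness(speed, feed, force):
    """Placeholder for the measured surface roughness (Ra) of one trial run."""
    # In a real study this value comes from measurement, not from a formula.
    return 0.8 - 1e-4 * speed + 0.05 * feed + 0.02 * abs(force - 10.0)

trials = []
for a, b, c in L9:
    ra = measured_roughness(speed_rpm[a], feed_mm_s[b], force_N[c])
    trials.append(((speed_rpm[a], feed_mm_s[b], force_N[c]), ra))

best, best_ra = min(trials, key=lambda t: t[1])
print("best parameter combination:", best, "Ra =", round(best_ra, 3))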
Abstract:
Based on the characteristics of a deep-sea manned submersible currently under development in China and the requirements for dynamic positioning control of manned submersibles, an adaptive LQR method, which combines the optimal control method LQR with recursive identification of the system parameters, is adopted for control. Simulation results show that this method achieves good control performance.
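As a rough illustration of the adaptive LQR idea described above, the sketch below combines recursive least-squares identification of a scalar plant model with an LQR gain recomputed from the current estimates. The one-dimensional plant, noise level and weights are assumptions chosen only to keep the example small; the actual submersible model and controller design are not reproduced here.

# Minimal sketch of an "adaptive LQR" loop: recursive least squares (RLS)
# identifies a scalar model x[k+1] = a*x[k] + b*u[k], and an LQR gain is
# recomputed from the current estimates. Plant, noise and weights are
# illustrative assumptions, not the submersible's actual parameters.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.95, 0.20          # unknown "true" plant (assumed)
Q, R = 1.0, 0.1                      # LQR weights (assumed)

def lqr_gain(a, b, Q, R, iters=200):
    """Solve the scalar discrete-time Riccati equation by fixed-point iteration."""
    P = Q
    for _ in range(iters):
        P = Q + a * P * a - (a * P * b) ** 2 / (R + b * P * b)
    return (b * P * a) / (R + b * P * b)   # control law u = -K x

# RLS state: parameter estimate theta = [a, b] and covariance P_rls.
theta = np.array([0.5, 0.5])
P_rls = np.eye(2) * 100.0
lam = 0.99                            # forgetting factor

x = 1.0                               # initial state (station-keeping error)
for k in range(200):
    K = lqr_gain(theta[0], theta[1], Q, R)
    u = -K * x
    x_next = a_true * x + b_true * u + 0.01 * rng.standard_normal()

    # RLS update with regressor phi = [x, u].
    phi = np.array([x, u])
    err = x_next - phi @ theta
    gain = P_rls @ phi / (lam + phi @ P_rls @ phi)
    theta = theta + gain * err
    P_rls = (P_rls - np.outer(gain, phi) @ P_rls) / lam
    x = x_next

print("estimated (a, b):", np.round(theta, 3), " final |x|:", abs(round(x, 4)))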
Abstract:
With the continuous development of robotics, cooperative multi-mobile-robot systems have emerged as a new field of research and application, bringing with them new requirements for robot control architectures. This paper analyzes the requirements that cooperative multi-mobile-robot systems place on the control architecture of an individual robot and, against this background, proposes a hybrid hierarchical architecture based on a comparison of two typical intelligent robot architectures.
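A hybrid hierarchical architecture of the kind proposed above can be sketched, in a much-simplified form, as a deliberative planning layer whose commands are filtered by a reactive safety layer under a per-robot executive. The class and method names below are illustrative assumptions, not the architecture defined in the paper.

# Minimal sketch of a hybrid hierarchical control architecture: a deliberative
# layer plans toward a goal, a reactive layer overrides it for safety, and an
# executive composes the two every control cycle. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Command:
    v: float        # forward velocity
    w: float        # angular velocity

class DeliberativeLayer:
    """Slow layer: plans toward a shared team goal (e.g. a formation waypoint)."""
    def plan(self, pose, goal):
        dx, dy = goal[0] - pose[0], goal[1] - pose[1]
        return Command(v=min(0.5, (dx ** 2 + dy ** 2) ** 0.5), w=0.2 * dy)

class ReactiveLayer:
    """Fast layer: overrides the planned command when an obstacle is too close."""
    def filter(self, cmd, nearest_obstacle_dist):
        if nearest_obstacle_dist < 0.3:
            return Command(v=0.0, w=0.5)    # stop and turn away
        return cmd

class Executive:
    """Per-robot executive that composes the two layers each control cycle."""
    def __init__(self):
        self.deliberative = DeliberativeLayer()
        self.reactive = ReactiveLayer()
    def step(self, pose, goal, nearest_obstacle_dist):
        return self.reactive.filter(self.deliberative.plan(pose, goal),
                                    nearest_obstacle_dist)

robot = Executive()
print(robot.step(pose=(0.0, 0.0), goal=(2.0, 1.0), nearest_obstacle_dist=1.5))
print(robot.step(pose=(0.0, 0.0), goal=(2.0, 1.0), nearest_obstacle_dist=0.1))

In a cooperative multi-robot setting, the deliberative layer would typically receive team-level goals from a coordination layer above it, while the reactive layer remains local to each robot.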
Abstract:
With the development of oil and gas exploration in China, continental exploration is shifting from structural reservoirs to subtle oil and gas reservoirs. The reserves of the subtle reservoirs discovered so far account for more than 60 percent of all discovered oil and gas reserves, so exploration for subtle reservoirs is becoming increasingly important and can be taken as the main direction for increasing reserves. The characteristics of continental sedimentary facies determine the complexity of lithological exploration. Most of the continental rift basins in East China have entered exploration stages of medium to high maturity. Although the quality of the seismic data there is relatively good, these areas are characterized by thin sand bodies, small faults and strata of limited extent, which demand seismic data of high resolution; improving the signal-to-noise ratio of the high-frequency components of the seismic data is therefore an important task. In West China, the surface landforms are complex, the exploration targets are deeply buried, the geological structures are complicated, faults are numerous, traps are small, rock properties are poor, high-pressure formations are common and drilling is difficult. These conditions produce seismic records with a low signal-to-noise ratio and many kinds of noise, and call for noise-attenuation methods and techniques in both data acquisition and processing. Oil and gas exploration therefore needs high-resolution geophysical techniques in order to implement the oil-resources strategy of keeping production and reserves stable in East China while growing production and reserves in West China. A high signal-to-noise ratio in the seismic data is the foundation: without it, high resolution and high fidelity cannot be achieved. We focus on structure-based analysis for improving the signal-to-noise ratio in complex areas and put forward several noise-attenuation methods that truly reflect the geological features: they preserve the geological structures, keep the edges of geological features and improve the identification of oil and gas traps. An emphasis on fundamentals, innovation and practical application runs through the paper. The conventional dip-scanning method, centred on the scanned point, inevitably blurs the edges of geological features such as faults and fractures. To solve this problem we develop a new dip-scanning method that scans from an end point toward the two sides, and on this basis we propose coherence-based signal estimation, coherence-based characterization of the seismic wavefield, and most-homogeneous dip scanning for noise attenuation. These methods preserve the geological character, suppress random noise and improve the signal-to-noise ratio and resolution. Conventional dip scanning operates in the time-space domain; we also put forward a new dip-scanning method in the frequency-wavenumber (f-k) domain, which exploits the separation of reflections with different dips in the f-k domain and can both reduce noise and recover dip information.
We describe a methodology for studying and developing filtering methods based on differential equations. It transforms the filtering equations from the frequency or f-k domain into the time or time-space domain and solves them with a finite-difference algorithm. The method does not require the seismic data to be stationary, so the filter parameters can vary at every temporal and spatial point, which enhances the adaptability of the filter, and it is computationally efficient. We also put forward a matching-pursuit method for noise suppression: it decomposes a signal into a linear expansion of waveforms selected from a redundant dictionary of functions, chosen to best match the signal structures, so that the effective signal can be extracted from the noisy data and the noise reduced. Finally, we introduce a beamforming filtering method for noise elimination; processing of real seismic data shows that it is effective in attenuating multiples, including internal multiples, and improves the signal-to-noise ratio and resolution while keeping the effective signals at high fidelity. Tests on theoretical models and application to real seismic data show that the methods in this paper can effectively suppress random noise, eliminate coherent noise and improve the resolution of seismic data; they are practical and their effect is clear.
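The matching-pursuit step mentioned above can be illustrated on a synthetic one-dimensional trace. In the sketch below, a redundant dictionary of shifted Ricker wavelets is built and the algorithm greedily subtracts the best-matching atom from the residual; the wavelet dictionary, trace and parameters are illustrative assumptions rather than the thesis's actual data or implementation.

# Minimal sketch of matching pursuit on a 1-D noisy trace. The Ricker-wavelet
# dictionary and the synthetic trace are illustrative assumptions.
import numpy as np

def ricker(n, f, dt):
    """Ricker wavelet of peak frequency f (Hz), n samples long."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

dt, n = 0.002, 256
rng = np.random.default_rng(1)

# Synthetic "clean" trace: two Ricker arrivals, then add random noise.
clean = np.zeros(n)
clean[60:60 + 64] += 1.0 * ricker(64, 30, dt)
clean[150:150 + 64] -= 0.7 * ricker(64, 20, dt)
trace = clean + 0.2 * rng.standard_normal(n)

# Redundant dictionary: unit-norm Ricker atoms at every shift and a few frequencies.
atoms = []
for f in (15, 20, 30, 40):
    w = ricker(64, f, dt)
    for shift in range(n - 64):
        atom = np.zeros(n)
        atom[shift:shift + 64] = w
        atoms.append(atom / np.linalg.norm(atom))
D = np.array(atoms)                      # shape (n_atoms, n)

# Greedy matching pursuit: repeatedly pick the atom best correlated with the residual.
residual, approx = trace.copy(), np.zeros(n)
for _ in range(6):
    corr = D @ residual
    k = np.argmax(np.abs(corr))
    approx += corr[k] * D[k]
    residual -= corr[k] * D[k]

print("residual energy before/after:",
      round(float(trace @ trace), 2), "->", round(float(residual @ residual), 2))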
Abstract:
We present an algorithm to store data robustly in a large, geographically distributed network by means of localized regions of data storage that move in response to changing conditions. For example, data might migrate away from failures or toward regions of high demand. The PersistentNode algorithm provides this service robustly, but with limited safety guarantees. We use the RAMBO framework to transform PersistentNode into RamboNode, an algorithm that guarantees atomic consistency in exchange for increased cost and decreased liveness. In addition, a half-life analysis of RamboNode shows that it is robust against continuous low-rate failures. Finally, we provide experimental simulations for the algorithm on 2000 nodes, demonstrating how it services requests and examining how it responds to failures.
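For readers unfamiliar with the RAMBO framework, the sketch below illustrates the quorum-based read/write idea that underlies this style of atomic consistency: writes install a higher (timestamp, writer) tag at a majority of replicas, and reads return and write back the highest tag seen at a majority. It is a single-configuration toy with no reconfiguration, migration or failure handling, so it should be read as background intuition rather than as the RamboNode algorithm itself.

# Toy illustration of quorum-based atomic read/write over five replicas.
# Values carry a (timestamp, writer_id) tag; any two majorities intersect,
# so the latest completed write is always visible to later reads.
import random

class Replica:
    def __init__(self):
        self.tag = (0, 0)        # (timestamp, writer_id)
        self.value = None

REPLICAS = [Replica() for _ in range(5)]
MAJORITY = len(REPLICAS) // 2 + 1

def quorum(replicas):
    """Pick an arbitrary majority of replicas (simulating which nodes respond)."""
    return random.sample(replicas, MAJORITY)

def write(writer_id, value):
    # Phase 1: query a majority for the highest tag seen so far.
    highest = max(r.tag for r in quorum(REPLICAS))
    new_tag = (highest[0] + 1, writer_id)
    # Phase 2: install the new (tag, value) at a majority.
    for r in quorum(REPLICAS):
        if new_tag > r.tag:
            r.tag, r.value = new_tag, value

def read():
    # Phase 1: collect from a majority and keep the value with the highest tag.
    best = max(quorum(REPLICAS), key=lambda r: r.tag)
    tag, value = best.tag, best.value
    # Phase 2 (write-back): propagate it to a majority so later reads see it too.
    for r in quorum(REPLICAS):
        if tag > r.tag:
            r.tag, r.value = tag, value
    return value

write(writer_id=1, value="blue")
write(writer_id=2, value="green")
print(read())   # "green": the latest completed write is visible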
Abstract:
This research is concerned with designing representations for analytical reasoning problems (of the sort found on the GRE and LSAT). These problems test the ability to draw logical conclusions. A computer program was developed that takes as input a straightforward predicate calculus translation of a problem, requests additional information if necessary, decides what to represent and how, designs representations capturing the constraints of the problem, and creates and executes a LISP program that uses those representations to produce a solution. Even though these problems are typically difficult for theorem provers to solve, the LISP program that uses the designed representations is very efficient.
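To make the problem class concrete, the sketch below encodes a small, made-up ordering puzzle of the GRE/LSAT sort as predicates over permutations and checks a candidate conclusion by brute force. The described program goes further by designing a problem-specific representation rather than enumerating; this example only shows the kind of constraints involved.

# Toy analytical-reasoning (ordering) puzzle, invented for illustration:
# constraints are predicates over an ordering of four speakers.
from itertools import permutations

people = ["Ann", "Ben", "Cal", "Dia"]

constraints = [
    lambda o: o.index("Ann") < o.index("Ben"),             # Ann speaks before Ben
    lambda o: abs(o.index("Cal") - o.index("Dia")) == 1,    # Cal and Dia are adjacent
    lambda o: o.index("Ben") != len(o) - 1,                 # Ben is not last
]

solutions = [o for o in permutations(people) if all(c(o) for c in constraints)]
for o in solutions:
    print(" -> ".join(o))

# A conclusion is logically valid if it holds in every remaining ordering,
# e.g. "Ann is not last":
print(all(o[-1] != "Ann" for o in solutions))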
Abstract:
PILOT is a programming system constructed in LISP. It is designed to facilitate the development of programs by easing the familiar sequence: write some code, run the program, make some changes, write some more code, run the program again, etc. As a program becomes more complex, making these changes becomes harder and harder because the implications of changes are harder to anticipate. In the PILOT system, the computer plays an active role in this evolutionary process by providing the means whereby changes can be effected immediately, and in ways that seem natural to the user. The user of PILOT feels that he is giving advice, or making suggestions, to the computer about the operation of his programs, and that the system then performs the work necessary. The PILOT system is thus an interface between the user and his program, monitoring both the requests of the user and the operation of his program. The user may easily modify the PILOT system itself by giving it advice about its own operation. This allows him to develop his own language and to shift gradually onto PILOT the burden of performing routine but increasingly complicated tasks. In this way, he can concentrate on the conceptual difficulties in the original problem, rather than on the niggling tasks of editing, rewriting, or adding to his programs. Two detailed examples are presented. PILOT is a first step toward computer systems that will help man to formulate problems in the same way they now help him to solve them. Experience with it supports the claim that such "symbiotic systems" allow the programmer to attack and solve more difficult problems.
Abstract:
BUILD is a tool for keeping modular systems in a consistent state by managing the construction tasks (e.g. compilation, linking, etc.) associated with such systems. It employs a user-supplied system model and a procedural description of the task to be performed in order to carry out that task. This differs from existing tools, which do not explicitly separate knowledge about systems from knowledge about how systems are manipulated. BUILD provides a static framework for modeling systems and handling construction requests that makes use of programming-environment-specific definitions. By altering the set of definitions, BUILD can be extended to work with new programming environments and to perform new tasks.
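The separation BUILD makes between system knowledge and task knowledge can be illustrated with a toy example: a declarative dependency model plus a table of task procedures. The module names and shell commands below are assumptions for illustration and are not taken from BUILD itself.

# Toy separation of "what the system is" (a dependency model, pure data) from
# "how tasks are done" (procedures keyed by task name). Retargeting to a new
# environment means replacing TASKS; the system model is untouched.
SYSTEM_MODEL = {
    "parser":  {"depends": []},
    "codegen": {"depends": ["parser"]},
    "app":     {"depends": ["parser", "codegen"]},
}

TASKS = {
    "compile": lambda mod: print(f"cc -c {mod}.c -o {mod}.o"),
    "link":    lambda mod: print(f"cc {mod}.o -o {mod}"),
}

def perform(task, module, model, done=None):
    """Walk the dependency graph and apply the task procedure bottom-up."""
    done = set() if done is None else done
    if module in done:
        return
    for dep in model[module]["depends"]:
        perform(task, dep, model, done)
    TASKS[task](module)
    done.add(module)

perform("compile", "app", SYSTEM_MODEL)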
Abstract:
The exchange of information between the police and community partners forms a central aspect of effective community service provision. In the context of policing, a robust and timely communications mechanism is required between police agencies and community partner domains, including: Primary healthcare (such as a Family Physician or a General Practitioner); Secondary healthcare (such as hospitals); Social Services; Education; and Fire and Rescue services. Investigations into high-profile cases such as the Victoria Climbié murder in 2000, the murders of Holly Wells and Jessica Chapman in 2002, and, more recently, the death of baby Peter Connelly through child abuse in 2007, highlight the requirement for a robust information-sharing framework. This paper presents a novel syntax that supports information-sharing requests within strict data-sharing policy definitions. Such requests may form the basis for any information-sharing agreement that can exist between the police and their community partners. The paper defines a role-based architecture across partner domains, with a syntax for effective and efficient information sharing, using SPoC (Single Point-of-Contact) agents to control information exchange. The application of policy definitions through rules within these SPoCs is inspired by network firewall rules, and these rules define information-exchange permissions. The rules can be implemented by software filtering agents that act as information gateways between partner domains. Roles are exposed from each domain to grant the rights to exchange information as defined within the policy definition. This work involves collaboration with the Scottish Police, as part of the Scottish Institute for Policing Research (SIPR), and aims to improve the safety of individuals by reducing risks to the community through enhanced information-sharing mechanisms.
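As an illustration of the firewall-inspired policy rules described above, the sketch below evaluates information-sharing requests against an ordered, default-deny rule list at a SPoC-like gateway. The domains, roles, categories and rules are invented for the example and do not reproduce the paper's actual syntax.

# Toy firewall-style evaluation of information-sharing requests at a gateway.
# Domains, roles, categories and rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    src_domain: str      # requesting partner domain, e.g. "police"
    dst_domain: str      # domain holding the information, e.g. "primary_care"
    role: str            # role exposed by the requesting domain
    category: str        # category of information requested

# Ordered rule list: first match wins, default deny (as in a firewall policy).
RULES = [
    # (src, dst, role, category, decision)
    ("police", "primary_care", "child_protection_officer", "welfare_concern",   "allow"),
    ("police", "education",    "child_protection_officer", "attendance_record", "allow"),
    ("police", "primary_care", "*",                        "medical_record",    "deny"),
]

def decide(req: Request) -> str:
    for src, dst, role, cat, decision in RULES:
        if (src in (req.src_domain, "*") and dst in (req.dst_domain, "*")
                and role in (req.role, "*") and cat in (req.category, "*")):
            return decision
    return "deny"        # default-deny when no rule matches

print(decide(Request("police", "primary_care", "child_protection_officer", "welfare_concern")))
print(decide(Request("police", "primary_care", "beat_officer", "medical_record")))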
Abstract:
Urquhart, C., Turner, J., Durbin, J. & Ryan, J. (2006). Evaluating the contribution of the clinical librarian to a multidisciplinary team. Library and Information Research, 30(94), 30-43. Sponsorship: NHS Trusts in North Wales
Abstract:
Cooper, J. & Urquhart, C. (2005). The information needs and information-seeking behaviours of home-care workers and clients receiving home care. Health Information and Libraries Journal, 22(2), 107-116. Sponsorship: AHRC
Abstract:
Postgraduate project/dissertation presented to Universidade Fernando Pessoa as part of the requirements for obtaining the degree of Master in Pharmaceutical Sciences.
Abstract:
This paper presents a new approach to window-constrained scheduling, suitable for multimedia and weakly-hard real-time systems. We originally developed an algorithm, called Dynamic Window-Constrained Scheduling (DWCS), that attempts to guarantee that no more than x out of y deadlines are missed for real-time jobs such as periodic CPU tasks or delay-constrained packet streams. While DWCS is capable of generating a feasible window-constrained schedule that utilizes 100% of resources, it requires all jobs to have the same request periods (or intervals between successive service requests). We describe a new algorithm called Virtual Deadline Scheduling (VDS), which provides window-constrained service guarantees to jobs with potentially different request periods while still maximizing resource utilization. VDS attempts to service m out of k job instances by their virtual deadlines, which may be some finite time after the corresponding real-time deadlines. Even so, VDS is capable of outperforming DWCS and similar algorithms when servicing jobs with potentially different request periods. Additionally, VDS is able to limit the extent to which a fraction of all job instances are serviced late. Results from simulations show that VDS can provide better window-constrained service guarantees than other related algorithms, while still having as good or better delay bounds for all scheduled jobs. Finally, an implementation of VDS in the Linux kernel compares favorably against DWCS for a range of scheduling loads.
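The virtual-deadline idea can be illustrated with a small, simplified scheduler: each stream must be serviced in at least m of every k request periods, and its virtual deadline tightens as its remaining slack shrinks. The sketch below is an illustration of that idea under simplifying assumptions (unit periods, one service per slot), not the exact VDS algorithm from the paper.

# Toy virtual-deadline scheduler: service the stream with the earliest
# virtual deadline each slot; deadlines stretch when a stream is ahead of
# its m-out-of-k constraint and tighten when it falls behind.
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    m: int            # required services ...
    k: int            # ... out of every k periods
    period: int       # request period (in slots)
    served: int = 0   # services granted in the current window
    seen: int = 0     # periods elapsed in the current window

    def virtual_deadline(self, now):
        remaining_needed = self.m - self.served
        remaining_slots = self.k - self.seen
        if remaining_needed <= 0:                 # constraint already met
            return float("inf")
        # The fewer spare periods left, the closer the virtual deadline.
        slack = remaining_slots - remaining_needed
        return now + (slack + 1) * self.period

streams = [Stream("audio", m=3, k=4, period=1), Stream("video", m=1, k=4, period=1)]

for now in range(8):                              # one service slot per tick
    s = min(streams, key=lambda s: s.virtual_deadline(now))
    s.served += 1
    print(f"t={now}: service {s.name}")
    for st in streams:                            # advance the constraint windows
        st.seen += 1
        if st.seen == st.k:
            st.served, st.seen = 0, 0

With the two example streams (3-of-4 and 1-of-4, equal periods), the slot-by-slot trace shows both constraints met while every slot is used.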
Abstract:
With the increasing demand for document transfer services such as the World Wide Web comes a need for better resource management to reduce the latency of documents in these systems. To address this need, we analyze the potential for document caching at the application level in document transfer services. We have collected traces of actual executions of Mosaic, reflecting over half a million user requests for WWW documents. Using those traces, we study the tradeoffs between caching at three levels in the system, and the potential for use of application-level information in the caching system. Our traces show that while a high hit rate in terms of URLs is achievable, a much lower hit rate is possible in terms of bytes, because most profitably-cached documents are small. We consider the performance of caching when applied at the level of individual user sessions, at the level of individual hosts, and at the level of a collection of hosts on a single LAN. We show that the performance gain achievable by caching at the session level (which is straightforward to implement) is nearly all of that achievable at the LAN level (where caching is more difficult to implement). However, when resource requirements are considered, LAN level caching becomes much more desirable, since it can achieve a given level of caching performance using a much smaller amount of cache space. Finally, we consider the use of organizational boundary information as an example of the potential for use of application-level information in caching. Our results suggest that distinguishing between documents produced locally and those produced remotely can provide useful leverage in designing caching policies, because of differences in the potential for sharing these two document types among multiple users.
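The gap between hit rate measured in URLs and hit rate measured in bytes can be reproduced with a toy LRU simulation, sketched below. The synthetic request trace and cache size are assumptions chosen to make the effect visible; the Mosaic traces analyzed in the paper are not used.

# Toy size-bounded LRU cache over a synthetic request trace, reporting both
# URL hit rate and byte hit rate. Trace and cache size are made up.
from collections import OrderedDict

# (url, size_in_bytes): small documents are requested often, large ones rarely.
trace = [("small/a", 2_000)] * 40 + [("small/b", 3_000)] * 30 + \
        [("big/movie", 5_000_000)] * 3 + [("big/image", 800_000)] * 5
CACHE_BYTES = 50_000

cache, used = OrderedDict(), 0
url_hits = byte_hits = total_bytes = 0

for url, size in trace:
    total_bytes += size
    if url in cache:
        cache.move_to_end(url)                # refresh recency on a hit
        url_hits += 1
        byte_hits += size
        continue
    if size <= CACHE_BYTES:                   # never cache objects larger than the cache
        while used + size > CACHE_BYTES:      # evict least-recently-used entries
            _, evicted = cache.popitem(last=False)
            used -= evicted
        cache[url] = size
        used += size

print(f"URL hit rate:  {url_hits / len(trace):.2%}")
print(f"byte hit rate: {byte_hits / total_bytes:.2%}")

Because the frequently requested documents are small and the rare ones are large, the URL hit rate comes out high while the byte hit rate stays low, mirroring the observation in the abstract.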