868 results for wide area measurement system (WAMS)
Abstract:
A method suitable for measuring the coordinates of deep-sea transponders is proposed: the perpendicular-intersection method. The method uses principles of solid geometry to obtain the transponder coordinates. Specifically, the survey mother ship sails along two mutually perpendicular tracks at a moderate distance from the transponder; on each track, the point with the minimum slant range to the transponder is found; through these two points, two perpendicular lines are drawn in the horizontal plane, and the longitude and latitude of their intersection are those of the transponder. The main factors affecting measurement error are analysed, and measurement principles are proposed to meet the accuracy requirements and give the measurement system good robustness. To improve ranging accuracy, the RRA algorithm from ray acoustics is used to correct the sound rays. Simulation experiments demonstrate the effectiveness of the perpendicular-intersection method. The method imposes no depth requirement, simplifies tedious field operations and the underwater acoustic measurement system, and has high practical engineering value.
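The geometric core of the perpendicular-intersection method can be sketched in a few lines; the code below is illustrative only (straight tracks, noise-free slant ranges, a local metric x-y frame rather than longitude/latitude), and all names are hypothetical:

```python
import numpy as np

def perpendicular_intersection(track1, ranges1, track2, ranges2):
    """Estimate the transponder's horizontal position from two
    mutually perpendicular survey tracks.

    track1, track2   : (N, 2) arrays of ship positions (x, y), metres
    ranges1, ranges2 : slant ranges (m) to the transponder measured
                       at the corresponding positions
    """
    # On each track, the transponder lies on the horizontal line that
    # passes through the minimum-slant-range point perpendicular to
    # the track.
    p1 = track1[np.argmin(ranges1)]
    p2 = track2[np.argmin(ranges2)]

    # Unit direction of each (assumed straight) track.
    d1 = (track1[-1] - track1[0]) / np.linalg.norm(track1[-1] - track1[0])
    d2 = (track2[-1] - track2[0]) / np.linalg.norm(track2[-1] - track2[0])

    # In-plane normals: the perpendicular lines are p1 + t*n1 and
    # p2 + s*n2; solve p1 + t*n1 = p2 + s*n2 for the intersection.
    n1 = np.array([-d1[1], d1[0]])
    n2 = np.array([-d2[1], d2[0]])
    t, _ = np.linalg.solve(np.column_stack([n1, -n2]), p2 - p1)
    return p1 + t * n1
```

Because the tracks are perpendicular, n1 is parallel to d2 and n2 to d1, so the 2x2 system is well conditioned; the ray-acoustic (RRA) correction would be applied to ranges1/ranges2 before the argmin.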
Abstract:
This paper presents a non-contact (model LW-1) system for measuring robot pose repeatability. The system uses eddy-current sensors as its position sensors. Based on the detection performance of these sensors, a corresponding sensor measurement structure, mathematical model, and coordinate-transformation solution method were designed, bringing the system's technical specifications and usability up to the practical requirements for measuring robot pose repeatability. The system is robust, rationally designed, structurally simple, and inexpensive, and can meet China's current needs in robotics research and in robot development and application.
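The abstract does not state which repeatability statistic the LW-1 system reports; a common convention (the ISO 9283 positioning-repeatability formula) is sketched below for reference, with illustrative names:

```python
import numpy as np

def position_repeatability(poses):
    """ISO 9283-style positioning repeatability RP = l_bar + 3*s_l,
    where l_j is the distance of each attained position from the
    barycentre of all attained positions.

    poses : (N, 3) array of attained positions (x, y, z) for N
            repeated visits to the same commanded pose
    """
    poses = np.asarray(poses, dtype=float)
    l = np.linalg.norm(poses - poses.mean(axis=0), axis=1)
    return l.mean() + 3.0 * l.std(ddof=1)  # mean + 3 sample std devs
```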
Abstract:
In order to carry out high-precision three-dimensional "integrated" secondary seismic exploration suited to the characteristics of the Biyang Depression, and by combining scientific research with production during implementation, a set of high-precision seismic acquisition, processing, and interpretation technologies suitable for the mature exploration areas of eastern China was worked out, with the following results. 1. A high-precision 3D seismic exploration technology series for the shallow, complex fault-block groups of the Biyang Depression. To highlight the shallow seismic signal, target-oriented observation-system design was applied in acquisition, together with a range of processing techniques from small-bin receiving to shallow-signal preservation; in interpretation, 3D visualization combined with coherence analysis supported fine full-volume 3D interpretation, identifying the unconformity surface 50-100 m below the surface and the distribution of small faults of about 10 m, and improving the recognition of small fault blocks and stratigraphic-unconformity traps. 2. A high-precision 3D seismic exploration technology series for the low signal-to-noise deep part of the Biyang Depression. Research combining forward modelling with illumination analysis, wide-angle high-coverage observation-system design, multiple suppression and enhancement of deep reflection energy in processing, fine interpretation, and comprehensive reservoir description identified a number of traps of different types. 3. A high-precision 3D seismic exploration technology series for the steep structures of the southern Biyang Depression. New techniques based on seismic-wave scattering theory, together with high-precision processing such as prestack time migration and depth migration built on accurate velocity models, provided a wealth of information for identifying local structures on the southern steep slope and for predicting sandstone distribution and analysing basement-surface patterns.
Abstract:
The foundation of reservoir models and the prediction of residual oil have long been the core of detailed reservoir description for improved oil production and enhanced oil recovery. The traditional approach to sandstone correlation, based on the geometrical similarity of well logs and the principle of "correlating by cycle, from larger to smaller scales", has shown its theoretical limits in explaining the scale, geometry, continuity, and connectivity of sandstones and the distribution of reservoir properties. Finding new theory and methods for reservoir correlation and property prediction has therefore become an urgent and difficult subject. Process-response analysis within base-level cycles offers a new way to correlate strata and build a reservoir framework, and makes it possible to analyse reservoir properties within that framework. Taking the reservoir of zonation 6-10 in S3^2 of Pucheng Oil Field in Henan Province as an example, we built a detailed reservoir stratigraphic framework through base-level correlation. Within this framework, sediment distribution and development are discussed on the basis of sediment volume partitioning and facies differentiation analysis, as are reservoir heterogeneities and their relation to base level. Analysis of the primary oil distribution shows that base level controls oil distribution in the reservoir. The following subjects are discussed in detail. 1. Facies modelling: based on analysis of sedimentary structures and sedimentary energy, a facies model was established. 2. Building the stratigraphic framework through base-level analysis: in the study zone, one long-term cycle, 6 middle-term cycles, and 27 short-term cycles were identified and correlated. 3. Predicting reservoir properties for improved oil development: base level controls sandbody properties. Short- and very-short-term cycles control the pattern of heterogeneity within sandbodies, while middle- and long-term cycles control areal and inter-layer heterogeneity. At low positions of the middle- and long-term base level, sandbodies are well developed, wide, and thick; at high positions of the base level, the opposite reservoir character is found. 4. Studying the reservoir development response and oil distribution as a solid basis for development adjustment: the primary oil distribution is controlled by base-level position. Sandbodies at high base-level positions are poorly swept because they are difficult to develop, whereas sandbodies at low base-level positions are well swept because they are relatively easy to develop and dominate production, yet they retain high residual oil because of their high original oil content.
Abstract:
Recent measurements of local-area and wide-area traffic have shown that network traffic exhibits variability at a wide range of scales: self-similarity. In this paper, we examine a mechanism that gives rise to self-similar network traffic and present some of its performance implications. The mechanism we study is the transfer of files or messages whose size is drawn from a heavy-tailed distribution. We examine its effects through detailed transport-level simulations of multiple TCP streams in an internetwork. First, we show that in a "realistic" client/server network environment (i.e., one with bounded resources and coupling among traffic sources competing for resources), the degree to which file sizes are heavy-tailed can directly determine the degree of traffic self-similarity at the link level. We show that this causal relationship is not significantly affected by changes in network resources (bottleneck bandwidth and buffer capacity), network topology, the influence of cross-traffic, or the distribution of interarrival times. Second, we show that properties of the transport layer play an important role in preserving and modulating this relationship. In particular, the reliable transmission and flow control mechanisms of TCP (Reno, Tahoe, or Vegas) serve to maintain the long-range dependency structure induced by heavy-tailed file size distributions. In contrast, if a non-flow-controlled and unreliable (UDP-based) transport protocol is used, the resulting traffic shows few self-similar characteristics: although still bursty at short time scales, it has little long-range dependence. If flow-controlled, unreliable transport is employed, the degree of traffic self-similarity is positively correlated with the degree of throttling at the source. Third, in exploring the relationship between file sizes, transport protocols, and self-similarity, we are also able to show some of the performance implications of self-similarity. We present data on the relationship between traffic self-similarity and network performance as captured by performance measures including packet loss rate, retransmission rate, and queueing delay. Increased self-similarity, as expected, results in degradation of performance. Queueing delay, in particular, exhibits a drastic increase with increasing self-similarity. Throughput-related measures such as packet loss and retransmission rate, however, increase only gradually with increasing traffic self-similarity as long as a reliable, flow-controlled transport protocol is used.
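The causal chain reported here (heavy-tailed transfer sizes inducing long-range dependence) is easy to reproduce numerically. The sketch below is not from the paper: it counts active transfers in an M/G/∞-style model with Pareto-distributed durations and estimates the Hurst parameter with the aggregate-variance method.

```python
import numpy as np

rng = np.random.default_rng(0)

def active_transfer_trace(n_slots=100_000, rate=3.0, alpha=1.5):
    """Active-transfer count per slot: transfers start as a Poisson
    process; each lasts a Pareto(alpha)-distributed number of slots.
    alpha < 2 (infinite variance) yields long-range dependence."""
    trace = np.zeros(n_slots)
    starts = rng.poisson(rate, n_slots)
    for t in np.repeat(np.arange(n_slots), starts):
        dur = int((1.0 - rng.random()) ** (-1.0 / alpha))  # Pareto sample
        trace[t:t + dur] += 1.0
    return trace

def hurst_aggvar(x, scales=(1, 4, 16, 64, 256, 1024)):
    """Aggregate-variance Hurst estimate: Var(X^(m)) ~ m^(2H - 2)."""
    v = [np.var(x[:len(x) // m * m].reshape(-1, m).mean(axis=1))
         for m in scales]
    slope = np.polyfit(np.log(scales), np.log(v), 1)[0]
    return 1.0 + slope / 2.0

print(hurst_aggvar(active_transfer_trace(alpha=1.5)))  # expect H near 0.75
print(hurst_aggvar(active_transfer_trace(alpha=3.0)))  # expect H near 0.5
```

For Pareto durations with shape 1 < α < 2 the theoretical value is H = (3 − α)/2, so α = 1.5 should give H ≈ 0.75, while the light-tailed α = 3.0 case falls back to H ≈ 0.5.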
Abstract:
The increased diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The parameterization of these control rules is done so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. In this paper, we define a new spectrum of window-based congestion control algorithms that are TCP-friendly as well as TCP-compatible under RED. In contrast to previous memoryless controls, our algorithms utilize history information in their control rules. Our proposed algorithms have two salient features: (1) they enable a wider region of TCP-friendliness, and thus more flexibility in trading off among smoothness, aggressiveness, and responsiveness; and (2) they ensure a faster convergence to fairness under a wide range of system conditions. We demonstrate analytically and through extensive ns simulations the steady-state and transient behaviors of several instances of this new spectrum of algorithms. In particular, SIMD is one instance in which the congestion window is increased super-linearly with time since the detection of the last loss. Compared to recently proposed TCP-friendly AIMD and binomial algorithms, we demonstrate the superiority of SIMD in: (1) adapting to sudden increases in available bandwidth, while maintaining competitive smoothness and responsiveness; and (2) rapidly converging to fairness and efficiency.
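The distinguishing increase rule of SIMD (window growth super-linear in the time since the last loss) can be sketched generically; the constants and the exact parameterization below are illustrative, not the paper's:

```python
def simd_like_window(rounds, loss_rounds, alpha=0.2, beta=0.5, w_init=10.0):
    """Evolve a square-increase / multiplicative-decrease window over
    RTT rounds: w(t) = w0 + alpha * t**2, t RTTs after the last loss;
    multiplicative back-off by beta on loss."""
    w, w0, t, trace = w_init, w_init, 0, []
    for r in range(rounds):
        if r in loss_rounds:
            w0, t = (1.0 - beta) * w, 0   # multiplicative decrease
        else:
            t += 1
        w = w0 + alpha * t * t            # super-linear (square) increase
        trace.append(round(w, 2))
    return trace

print(simd_like_window(16, loss_rounds={8}))
```

The square increase makes the window open slowly immediately after a loss (smoothness) but accelerate as loss-free time accumulates, which is what lets a SIMD-style rule adapt quickly to sudden increases in available bandwidth.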
Abstract:
A significant impediment to deployment of multicast services is the daunting technical complexity of developing, testing and validating congestion control protocols fit for wide-area deployment. Protocols such as pgmcc and TFMCC have recently made considerable progress on the single rate case, i.e. where one dynamic reception rate is maintained for all receivers in the session. However, these protocols have limited applicability, since scaling to session sizes beyond tens of participants necessitates the use of multiple rate protocols. Unfortunately, while existing multiple rate protocols exhibit better scalability, they are both less mature than single rate protocols and suffer from high complexity. We propose a new approach to multiple rate congestion control that leverages proven single rate congestion control methods by orchestrating an ensemble of independently controlled single rate sessions. We describe SMCC, a new multiple rate equation-based congestion control algorithm for layered multicast sessions that employs TFMCC as the primary underlying control mechanism for each layer. SMCC combines the benefits of TFMCC (smooth rate control, equation-based TCP friendliness) with the scalability and flexibility of multiple rates to provide a sound multiple rate multicast congestion control policy.
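The cumulative-layer idea behind SMCC (not its actual join/leave machinery) reduces to subscribing to the largest prefix of layers whose summed rate fits under the rate the receiver's TFMCC-style controller allows; a minimal sketch with illustrative numbers:

```python
def layers_to_join(layer_rates, allowed_rate):
    """Number of cumulative multicast layers to subscribe to, given
    per-layer rates and the receiver's equation-based allowed rate."""
    total, k = 0.0, 0
    for rate in layer_rates:
        if total + rate > allowed_rate:
            break
        total, k = total + rate, k + 1
    return k

print(layers_to_join([32, 64, 128, 256], allowed_rate=200))  # -> 2
```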
Abstract:
Twelve months of aerosol size distributions from 3 to 560 nm, measured using scanning mobility particle sizers, are presented with an emphasis on average number, surface, and volume distributions, and on seasonal and diurnal variation. The measurements were made at the main sampling site of the Pittsburgh Air Quality Study from July 2001 to June 2002. These are supplemented with 5 months of size distribution data from 0.5 to 2.5 μm measured with a TSI aerodynamic particle sizer and 2 months of size distributions measured at an upwind rural sampling site. Measurements at the main site were made continuously under both low and ambient relative humidity. The average Pittsburgh number concentration (3-500 nm) is 22,000 cm⁻³ with an average mode size of 40 nm. Strong diurnal patterns in number concentration are evident as a direct effect of the sources of particles (atmospheric nucleation, traffic, and other combustion sources). New particle formation from homogeneous nucleation is significant on 30-50% of study days and over a wide area (at least a hundred kilometers). Rural number concentrations are a factor of 2-3 lower (on average) than the urban values. Average measured distributions differ from model literature urban and rural size distributions.
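For reference, number, surface, and volume distributions are related by standard moment weightings (these relations are textbook aerosol physics, not code from the study):

```python
import numpy as np

def surface_volume_from_number(dp_nm, dN_dlogDp):
    """Convert a number size distribution dN/dlogDp (cm^-3) at
    diameters dp_nm (nm) into surface and volume distributions:
        dS/dlogDp = pi * Dp^2 * dN/dlogDp       (um^2 cm^-3)
        dV/dlogDp = (pi / 6) * Dp^3 * dN/dlogDp (um^3 cm^-3)
    """
    dp_um = np.asarray(dp_nm, dtype=float) / 1e3
    dS = np.pi * dp_um**2 * np.asarray(dN_dlogDp)
    dV = (np.pi / 6.0) * dp_um**3 * np.asarray(dN_dlogDp)
    return dS, dV
```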
Abstract:
Satellite remote sensing of ocean colour is the only method currently available for synoptically measuring wide-area properties of ocean ecosystems, such as phytoplankton chlorophyll biomass. Recently, a variety of bio-optical and ecological methods have been established that use satellite data to identify and differentiate between either phytoplankton functional types (PFTs) or phytoplankton size classes (PSCs). In this study, several of these techniques were evaluated against in situ observations to determine their ability to detect dominant phytoplankton size classes (micro-, nano- and picoplankton). The techniques are applied to a 10-year ocean-colour data series from the SeaWiFS satellite sensor and compared with in situ data (6504 samples) from a variety of locations in the global ocean. Results show that spectral-response, ecological and abundance-based approaches can all perform with similar accuracy. Detection of microplankton and picoplankton was generally better than detection of nanoplankton. Abundance-based approaches were shown to provide better spatial retrieval of PSCs. Individual model performance varied according to PSC, input satellite data sources and in situ validation data types. Uncertainty in the comparison procedure and data sources was considered. Improved availability of in situ observations would aid ongoing research in this field.
Abstract:
High level environmental screening study for offshore wind farm developments – marine habitats and species. This report provides an awareness of the environmental issues related to marine habitats and species for developers and regulators of offshore wind farms. The information is also relevant to other offshore renewable energy developments. The marine habitats and species considered are those associated with the seabed, seabirds, and sea mammals. The report concludes that the following key ecological issues should be considered in the environmental assessment of offshore wind farm developments:
• likely changes in benthic communities within the affected area and resultant indirect impacts on fish populations and their predators such as seabirds and sea mammals;
• potential changes to the hydrography and wave climate over a wide area, and potential changes to coastal processes and the ecology of the region;
• likely effects on spawning or nursery areas of commercially important fish and shellfish species;
• likely effects on mating and social behaviour in sea mammals, including migration routes;
• likely effects on feeding water birds, seal pupping sites, and damage to sensitive or important intertidal sites where cables come onshore;
• potential displacement of fish, seabirds, and sea mammals from preferred habitats;
• potential effects on species and habitats of marine natural heritage importance;
• potential cumulative effects on seabirds, due to displacement of flight paths and any mortality from bird strike, especially in sensitive, rare, or scarce species;
• possible effects of electromagnetic fields on feeding behaviour and migration, especially in sharks and rays; and
• potential marine conservation and biodiversity benefits of offshore wind farm developments as artificial reefs and 'no-take' zones.
The report provides an especially detailed assessment of the likely sensitivity of seabed species and habitats in the proposed development areas. Although sensitive to some of the factors created by wind farm developments, these mainly have a high recovery potential. The way in which survey data can be linked to Marine Life Information Network (MarLIN) sensitivity assessments to produce maps of sensitivity to factors is demonstrated. Assessing change to marine habitats and species as a result of wind farm developments has to take account of the natural variability of marine habitats, which can be high, especially in shallow sediment biotopes. There are several reasons for such changes, but physical disturbance of habitats and short-term climatic variability are likely to be especially important. Wind farm structures themselves will attract marine species, including those that attach to the towers and scour protection, fish that associate with offshore structures, and seabirds (especially sea duck) that may find food and shelter there. Nature conservation designations especially relevant to areas where wind farms might be developed are described and the larger areas are mapped. There are few designated sites that extend offshore to where wind farms are likely to be developed. However, cable routes and landfalls may especially impinge on designated sites. The criteria that have been developed to assess the likely marine natural heritage importance of a location, or of the habitats and species that occur there, can be applied to survey information to assess whether or not there is anything of particular marine natural heritage importance in a development area.
A decision tree is presented that can be used to apply ‘duty of care’ principles to any proposed development. The potential ‘gains’ for the local environment are explored. Wind farms will enhance the biodiversity of areas, could act as refugia for fish, and could be developed in a way that encourages enhancement of fish stocks including shellfish.
Abstract:
Despite increased research over the last decade, diversity patterns in Antarctic deep-sea benthic taxa and their driving forces are only marginally known. Depth-related patterns of diversity and distribution of isopods and bivalves collected in the Atlantic sector of the Southern Ocean are analysed. The data, sampled by epibenthic sledge at 40 deep-sea stations from the upper continental slope to the hadal zone (774-6348 m) over a wide area of the Southern Ocean, comprise 619 species of isopods and 81 species of bivalves. There were more species of isopods than bivalves in all samples, and species per station varied from 2 to 85 for isopods and from 0 to 18 for bivalves. Most species were rare, with 72% of isopod species and 45% of bivalve species restricted to one or two stations. Among less-rare species, bivalves tended to have wider distributions than isopods. The species richness of isopods varied with depth, showing a weak unimodal curve with a peak at 2000-4000 m, while the richness of bivalves did not. Multivariate analyses indicate that there are two main assemblages in the Southern Ocean, one shallow and one deep, which overlap over a large depth range (2000-4000 m). Comparing analyses based on the Sørensen resemblance measure (presence/absence) and Γ+ (presence/absence incorporating relatedness among species) indicates that rare species tend to have other closely related species within the same depth band. Analysis of relatedness among species indicates that the taxonomic variety of bivalves tends to decline at depth, whereas that of isopods is maintained. This, it is speculated, may indicate that the available energy at depth is insufficient to maintain a range of bivalve life-history strategies.
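The Sørensen (presence/absence) resemblance used in the multivariate comparisons has a simple closed form; a minimal sketch (the Γ+ taxonomic-relatedness variant is not reproduced here):

```python
def sorensen(species_a, species_b):
    """Sørensen similarity between two stations' species lists:
    2 * shared species / (richness_a + richness_b)."""
    a, b = set(species_a), set(species_b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

print(sorensen({"sp1", "sp2", "sp3"}, {"sp2", "sp3", "sp4"}))  # 0.667
```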
Abstract:
A service is a remote computational facility which is made available for general use by means of a wide-area network. Several types of service arise in practice: stateless services, shared state services and services with states which are customised for individual users. A service-based orchestration is a multi-threaded computation which invokes remote services in order to deliver results back to a user (publication). In this paper a means of specifying services and reasoning about the correctness of orchestrations over stateless services is presented. As web services are potentially unreliable the termination of even finite orchestrations cannot be guaranteed. For this reason a partial-correctness powerdomain approach is proposed to capture the semantics of recursive orchestrations.
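The notion of an orchestration invoking remote services and publishing results can be pictured with a small concurrent sketch (illustrative only; the paper's formal specification language and powerdomain semantics are not reproduced, and the in-process functions stand in for genuinely unreliable web services):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Stand-ins for remote, stateless services; real network calls may
# fail or never return, which is why termination of even finite
# orchestrations cannot be guaranteed.
def service_a(x): return x + 1
def service_b(x): return x * 2

def orchestration(x, publish):
    """Invoke two stateless services concurrently and publish each
    result to the user as it arrives."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(service_a, x), pool.submit(service_b, x)]
        for f in as_completed(futures):
            publish(f.result())

orchestration(10, print)  # publishes 11 and 20 in completion order
```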
Abstract:
The POINT-AGAPE (Pixel-lensing Observations with the Isaac Newton Telescope-Andromeda Galaxy Amplified Pixels Experiment) survey is an optical search for gravitational microlensing events towards the Andromeda galaxy (M31). As well as microlensing, the survey is sensitive to many different classes of variable stars and transients. Here we describe the automated detection and selection pipeline used to identify M31 classical novae (CNe) and we present the resulting catalogue of 20 CN candidates observed over three seasons. CNe are observed both in the bulge region as well as over a wide area of the M31 disc. Nine of the CNe are caught during the final rise phase and all are well sampled in at least two colours. The excellent light-curve coverage has allowed us to detect and classify CNe over a wide range of speed class, from very fast to very slow. Among the light curves is a moderately fast CN exhibiting entry into a deep transition minimum, followed by its final decline. We have also observed in detail a very slow CN which faded by only 0.01 mag d⁻¹ over a 150-d period. We detect other interesting variable objects, including one of the longest-period and most luminous Mira variables. The CN catalogue constitutes a uniquely well-sampled and objectively selected data set with which to study the statistical properties of CNe in M31, such as the global nova rate, the reliability of novae as standard-candle distance indicators and the dependence of the nova population on stellar environment. The findings of this statistical study will be reported in a follow-up paper.
Abstract:
The mean velocity and turbulence intensity are the two main inputs for investigating ship-propeller-induced seabed scouring when a vessel is manoeuvring within a port where underkeel clearances are low. More accurate data, including turbulence intensity, are now available from laser Doppler anemometry (LDA) measurements and the computational fluid dynamics (CFD) approach. Turbulence intensity is loosely defined as the root mean square (RMS) of the velocity fluctuations referenced to a mean flow velocity; however, the fluctuation and the mean velocity can be either overall values combining the x, y, and z directions or the values of a single component. The LDA and CFD results were obtained from two different systems (a Dantec LDA system and the Fluent CFD package), and therefore the outputs cannot be compared directly. An effective method is proposed for comparing turbulence intensity between experimental measurements and computational predictions within a ship propeller jet. The flow patterns of turbulence intensity within a ship propeller jet are presented using the LDA measurements and CFD results from the standard k-ε, RNG k-ε, realizable k-ε, standard k-ω, SST k-ω, and Reynolds stress turbulence models.
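The ambiguity the method resolves (single-component versus overall turbulence intensity) is concisely stated in code; the two conventions below are standard definitions, not the paper's specific comparison procedure:

```python
import numpy as np

def ti_single_component(u):
    """Turbulence intensity of one component: RMS of that component's
    fluctuation over the absolute component mean."""
    u = np.asarray(u, dtype=float)
    return np.std(u) / abs(np.mean(u))

def ti_overall(u, v, w):
    """Overall turbulence intensity: RMS of the three fluctuation
    components over the mean velocity magnitude."""
    u, v, w = (np.asarray(c, dtype=float) for c in (u, v, w))
    rms = np.sqrt((np.var(u) + np.var(v) + np.var(w)) / 3.0)
    return rms / np.linalg.norm([u.mean(), v.mean(), w.mean()])
```

Because the two conventions can differ substantially in a strongly swirling propeller jet, LDA and CFD outputs must be reduced to the same definition before they are compared.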
Abstract:
The concentration of organic acids in anaerobic digesters is one of the most critical parameters for monitoring and advanced control of anaerobic digestion processes, so a reliable online measurement system is absolutely necessary. A novel approach to obtaining these measurements indirectly and online, using UV/vis spectroscopic probes in conjunction with powerful pattern recognition methods, is presented in this paper. A UV/vis spectroscopic probe from S::CAN is used in combination with a custom-built dilution system to monitor the absorption of fully fermented sludge over a spectrum from 200 to 750 nm. Advanced pattern recognition methods are then used to map the non-linear relationship between the measured absorption spectra and laboratory measurements of organic acid concentrations. Linear discriminant analysis, generalized discriminant analysis (GerDA), support vector machines (SVM), relevance vector machines, random forests, and neural networks are investigated for this purpose and their performance compared. To validate the approach, online measurements were taken at a full-scale 1.3-MW industrial biogas plant. Results show that, whereas some of the methods considered do not yield satisfactory results, accurate prediction of organic acid concentration ranges can be obtained with both GerDA- and SVM-based classifiers, with classification rates in excess of 87% achieved on test data.
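As a rough illustration of the final classification stage (absorption spectra mapped to organic-acid concentration ranges), here is a minimal scikit-learn sketch with placeholder data; the S::CAN probe preprocessing, the dilution system, and the GerDA classifier from the paper are not reproduced:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder data: rows are absorption spectra (e.g. 200-750 nm on a
# fixed wavelength grid); labels are concentration-range classes taken
# from laboratory reference measurements.
X = rng.normal(size=(300, 221))
y = rng.integers(0, 3, size=300)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[:200], y[:200])
print("held-out accuracy:", clf.score(X[200:], y[200:]))
```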