922 results for Data Streams Distribution


Relevance: 80.00%

Abstract:

The paper describes the marine fish resources living in Mozambican waters. Data on distribution areas, reproduction, age, growth and stock size are presented, together with current prices for the commercially exploited stocks. The problems involved in assessing catch areas are discussed.

Relevance: 80.00%

Abstract:

Scads (Decapterus russellii, D. macrosoma, Selar crumenophthalmus), Indian mackerel (Rastrelliger kanagurta) and horse mackerel (Trachurus trachurus) are the main pelagic species caught in the bottom-trawl fishery at Sofala Bank and at Boa Paz. Information on catch and effort is presented together with available data on the distribution, spawning, size at first maturity, growth, mortality and biomass of these species. As the present catch and fishing mortality are low compared to the estimates of biomass and total mortality, it is concluded that the fishery may be expanded further in the future.

Relevance: 80.00%

Abstract:

A 1.55 μm InGaAsP-InP partly gain-coupled two-section DFB self-pulsation laser (SPL) with a varied ridge width has been fabricated. The laser produces self-pulsations with a frequency tuning range of more than 135 GHz. All-optical clock recovery from 40 Gb/s degraded data streams has been demonstrated. Successful locking of the device at frequencies of 30 GHz, 40 GHz, 50 GHz and 60 GHz to a 10 GHz side-mode injection has also been achieved, demonstrating its capability for all-optical clock recovery at different frequencies. This flexibility is highly desirable for practical use.

Relevance: 80.00%

Abstract:

Data streams are a new class of application that has emerged in recent years, characterized by continuous, unbounded, high-speed arrival of data. Typical data streams include the monitoring readings returned by sensors in wireless sensor networks, stock-price feeds from exchanges, measurements from network and road-traffic monitoring systems, call records in telecommunications, and website log data. Data streams pose great challenges to traditional data management and mining techniques: conventional mining algorithms usually make multiple passes over a static data set and have high time and space complexity, so they are hard to apply directly in a streaming environment. This thesis studies frequent itemset mining over data streams in depth; the main contributions and innovations are as follows. First, a comprehensive survey of frequent itemset mining is given: it introduces the concepts, taxonomy and classic algorithms for static data sets, and then analyzes the problems, challenges and state of the art of frequent itemset mining over data streams. For the frequent-element mining problem, a simple and efficient algorithm is proposed for mining frequent elements over a sliding window; it can periodically return frequent elements satisfying an ε-approximation guarantee and can also answer queries submitted by the user at any time with results within the error bound. For the frequent itemset mining problem, the BFI-Stream algorithm is proposed, which mines all frequent itemsets over a sliding window and returns exact results in real time; it uses a prefix-tree data structure and prunes a portion of the infrequent nodes during construction and update, so both its space and time complexity are low. Next, to address the excessive memory consumed by existing algorithms that maintain too many infrequent itemsets, an optimistic pruning method is proposed that greatly reduces space complexity: an analysis of the itemset frequency distribution of real data sets motivates the method, which prunes most infrequent itemsets; experiments show that it not only greatly reduces memory usage but also improves update efficiency. Further, for approximate queries with a user-specified minimum support and error tolerance, the approximate algorithm AFI-Stream is proposed for mining frequent itemsets over a sliding window with results within the error bound; AFI-Stream maintains only frequent itemsets, not infrequent ones, and thus greatly reduces memory usage. Finally, to support this line of research, a prototype system for frequent itemset mining over data streams, StreamMiner, was designed and implemented for evaluating and studying the algorithms.
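The sliding-window frequent-element problem can be illustrated with a minimal exact sketch (not the thesis's ε-approximate algorithm, which avoids storing the whole window): keep the last W items with their counts, and after each arrival report every item whose count reaches the support threshold.

```python
from collections import Counter, deque

def frequent_in_window(stream, window, support):
    """After each arrival, yield the set of items whose frequency in the
    current sliding window is at least `support` (a fraction of the window)."""
    buf = deque()
    counts = Counter()
    for item in stream:
        buf.append(item)
        counts[item] += 1
        if len(buf) > window:          # expire the oldest item
            old = buf.popleft()
            counts[old] -= 1
            if counts[old] == 0:
                del counts[old]
        threshold = support * len(buf)
        yield {x for x, c in counts.items() if c >= threshold}

# Example: 'a' dominates early; 'b' becomes frequent once its burst
# fills the window.
stream = list("aabaaabbba")
snapshots = list(frequent_in_window(stream, window=5, support=0.5))
```

The exact method costs O(W) memory per window; the thesis's sliding-window algorithms trade exactness (or prune infrequent structure, as in BFI-Stream's prefix tree) precisely to avoid this cost.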

Relevance: 80.00%

Abstract:

In order to discover the distribution law of the remaining oil, this paper focuses on the quantitative characterization of reservoir heterogeneity and the distribution of fluid barriers and interbeds, based on a fine-scale geological study of the reservoir in the Liuhua 11-1 oil field. A refined quantitative reservoir geological model has been established by means of core analysis, logging evaluation of vertical and horizontal wells, and seismic interpretation and prediction. Using an integrated approach combining dynamic and static data, the distribution characteristics, formation conditions and controlling factors of the remaining oil in the Liuhua 11-1 oil field are illustrated. The study identifies the enrichment regions of the remaining oil and gives scientific direction for its further development. The main achievements are as follows:
1. On the basis of reservoir division and correlation, eight lithohorizons (layers A, B_1, B_2, B_3, C, D, E and F) are discriminated from the top to the bottom of the reservoir. The reef facies is subdivided into reef-core, fore-reef and back-reef subfacies, which are further subdivided into five microfacies: coral-algal limestone, coral-algal micrite, coral-algal clastic limestone, bioclastic limestone and foraminiferal limestone. To illustrate the distribution of remaining oil in the high-water-cut period, the stratigraphic structure model and sedimentary model are reconstructed.
2. To study intra-layer, inter-layer and areal reservoir heterogeneity, a new method of characterizing heterogeneity using an Index of Reservoir Heterogeneity (IRH) is introduced. The results indicate that heterogeneity is moderate in layers B_1 and B_3, strong in layers A, B_2, C and E, and weak in layer D.
3. Based on the study of the distribution of fluid barriers and interbeds, their effect on fluid seepage is revealed. Barriers and interbeds are abundant in layer A, where they control the distribution of crude oil, and relatively abundant in layers B_2, C and E, where they control the movement of the bottom water. Layers B_1, B_3 and D tend to be waterflooded because barriers and interbeds are sparse there.
4. From the analysis of reservoir heterogeneity, fluid barriers and interbeds, and the distribution of bottom water, four remaining-oil enrichment regions are discovered. The main one lies north of well LH11-1A; two minor ones lie east of well LH11-1-3 and between wells LH11-1-3 and LH11-1-5; the last lies in layer E, where the interbeds are discontinuous.
5. The reservoir and fluid parameters are obtained from core analysis, logging evaluation of vertical and horizontal wells, and seismic interpretation and prediction. These parameters provide the data for the quantitative characterization of reservoir heterogeneity and of the distribution of fluid barriers and interbeds.
6. An integrated method for predicting the distribution of remaining oil is put forward on the basis of the refined reservoir geological model and reservoir numerical simulation. The precision of the history match and of the remaining-oil prediction is greatly improved; the integrated study embodies the latest trend in this research field.
7. It is shown that the enrichment of remaining oil at high water cut in the Liuhua 11-1 oil field is influenced by reservoir heterogeneity, fluid barriers and interbeds, the sealing property of faults, the drive mechanism of the bottom water and the exploitation manner of horizontal wells.
8. Using microfacies, IRH, reservoir structure, effective thickness, reservoir physical properties, the distribution of fluid barriers and interbeds, and the analysis of oil and water movement and production data, twelve new sidetracked holes are proposed and demonstrated. The results help guide oil field development and have achieved good effect.

Relevance: 80.00%

Abstract:

This paper describes an algorithm for scheduling packets in real-time multimedia data streams. Common to these classes of data streams are service constraints in terms of bandwidth and delay. However, it is typical for real-time multimedia streams to tolerate bounded delay variations and, in some cases, finite losses of packets. We have therefore developed a scheduling algorithm that assumes streams have window-constraints on groups of consecutive packet deadlines. A window-constraint defines the number of packet deadlines that can be missed in a window of deadlines for consecutive packets in a stream. Our algorithm, called Dynamic Window-Constrained Scheduling (DWCS), attempts to guarantee no more than x out of a window of y deadlines are missed for consecutive packets in real-time and multimedia streams. Using DWCS, the delay of service to real-time streams is bounded even when the scheduler is overloaded. Moreover, DWCS is capable of ensuring independent delay bounds on streams, while at the same time guaranteeing minimum bandwidth utilizations over tunable and finite windows of time. We show the conditions under which the total demand for link bandwidth by a set of real-time (i.e., window-constrained) streams can exceed 100% and still ensure all window-constraints are met. In fact, we show how it is possible to guarantee worst-case per-stream bandwidth and delay constraints while utilizing all available link capacity. Finally, we show how best-effort packets can be serviced with fast response time, in the presence of window-constrained traffic.
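The x-out-of-y window-constraint idea can be made concrete with a toy selector (an illustration of the constraint, not the published DWCS algorithm): track, per stream, how many of the current window's deadlines may still be missed, and serve the stream with the least remaining tolerance.

```python
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    x: int            # deadlines that may be missed ...
    y: int            # ... per window of y consecutive deadlines
    missed: int = 0   # misses accumulated in the current window
    seen: int = 0     # deadlines elapsed in the current window

def pick(streams):
    # Serve the stream with the least remaining miss tolerance,
    # breaking ties by how soon its window closes.
    return min(streams, key=lambda s: (s.x - s.missed, s.y - s.seen))

def step(streams):
    served = pick(streams)
    for s in streams:
        if s is not served:          # unserved streams miss this deadline
            s.missed += 1
        s.seen += 1
        if s.seen == s.y:            # window boundary: counters reset
            s.missed = s.seen = 0
    return served.name

streams = [Stream("audio", x=1, y=3), Stream("video", x=2, y=3)]
order = [step(streams) for _ in range(6)]
```

In this run "audio" (tolerating 1 miss per 3) gets two of every three slots and "video" (tolerating 2 per 3) gets one, so both window-constraints hold even though only one packet is served per slot; the real DWCS additionally handles deadlines, overload and bandwidth guarantees.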

Relevance: 80.00%

Abstract:

Current low-level networking abstractions on modern operating systems are commonly implemented in the kernel to provide sufficient performance for general-purpose applications. However, it is desirable for high-performance applications to have more control over the networking subsystem to support optimizations for their specific needs. One approach is to allow networking services to be implemented at user level. Unfortunately, this typically incurs costs due to scheduling overheads and unnecessary data copying via the kernel. In this paper, we describe a method for implementing efficient application-specific network service extensions at user level that removes the cost of scheduling and provides protected access to lower-level system abstractions. We present a networking implementation that, with minor modifications to the Linux kernel, passes data between "sandboxed" extensions and the Ethernet device without copying or processing in the kernel. Using this mechanism, we put a customizable networking stack into a user-level sandbox and show how it can be used to efficiently process and forward data via proxies, or intermediate hosts, in the communication path of high-performance data streams. Unlike other user-level networking implementations, our method imposes no special hardware requirements to avoid unnecessary data copies. Results show that we achieve a substantial increase in throughput over comparable user-space methods using our networking stack implementation.

Relevance: 80.00%

Abstract:

The high-intensity, high-resolution x-ray source at the European Synchrotron Radiation Facility (ESRF) has been used in x-ray diffraction (XRD) experiments to detect intermetallic compounds (IMCs) in lead-free solder bumps. The IMCs found in 95.5Sn3.8Ag0.7Cu solder bumps on Cu pads with electroplated-nickel immersion-gold (ENIG) surface finish are consistent with results based on traditional destructive methods. Moreover, after positive identification of the IMCs from the diffraction data, spatial distribution plots over the entire bump were obtained. These spatial distributions for selected intermetallic phases display the layer thickness and confirm the locations of the IMCs. For isothermally aged solder samples, results have shown that much thicker layers of IMCs have grown from the pad interface into the bulk of the solder. Additionally, the XRD technique has also been used in a temperature-resolved mode to observe the formation of IMCs, in situ, during the solidification of the solder joint. The results demonstrate that the XRD technique is very attractive as it allows for nondestructive investigations to be performed on expensive state-of-the-art electronic components, thereby allowing new, lead-free materials to be fully characterized.

Relevance: 80.00%

Abstract:

Johnson's SB and the logit-logistic are four-parameter distribution models that may be obtained from the standard normal and logistic distributions by a four-parameter transformation. For relatively small data sets, such as diameter at breast height measurements obtained from typical sample plots, distribution models with four or fewer parameters have been found to be empirically adequate. However, in situations in which the distributions are complex, for example in mixed stands, when the stand has been thinned, or when working with aggregated data, distribution models with more shape parameters may prove necessary. By replacing the symmetric standard logistic distribution of the logit-logistic with a one-parameter "standard Richards" distribution and transforming by a five-parameter Richards function, we obtain a new six-parameter distribution model, the "Richit-Richards". The Richit-Richards includes the "logit-Richards", the "Richit-logistic", and the logit-logistic as submodels. Maximum likelihood estimation is used to fit the model, and some problems in the maximum likelihood estimation of bounding parameters are discussed. An empirical case study of the Richit-Richards and its submodels is conducted on pooled diameter at breast height data from 107 sample plots of Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.). It is found that the new models provide significantly better fits than the four-parameter logit-logistic for large data sets.
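The four-parameter transformation behind the logit-logistic can be sketched directly: if Z is a standard logistic variate, then ξ + λ·logistic(μ + σZ) is bounded on (ξ, ξ + λ). A minimal sampler under illustrative parameter names (the Richards generalizations replace the logistic pieces with Richards functions):

```python
import math, random

def rand_logit_logistic(xi, lam, mu, sigma, rng=random):
    """Draw one logit-logistic variate: generate Z ~ standard logistic
    via inverse CDF, shift/scale it, and squash through the logistic
    function onto the bounded interval (xi, xi + lam)."""
    u = rng.random()
    z = math.log(u / (1.0 - u))                      # standard logistic
    return xi + lam / (1.0 + math.exp(-(mu + sigma * z)))

# e.g. diameters bounded between 5 and 45 cm, median at the midpoint
random.seed(42)
sample = [rand_logit_logistic(xi=5.0, lam=40.0, mu=0.0, sigma=0.8)
          for _ in range(10000)]
```

Because the transformation is monotone, the sample median sits near ξ + λ·logistic(μ) = 25 here, and every draw stays inside the bounds, which is why estimating the bounding parameters ξ and λ by maximum likelihood is the delicate step the abstract mentions.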

Relevance: 80.00%

Abstract:

A power- and resource-efficient 'dynamic-range utilisation' technique to increase the operational capacity of DSP IP cores, by exploiting redundancy in the data representation of sampled analogue input data, is presented. By cleverly partitioning the dynamic range into separable processing threads, several data streams are computed concurrently on the same hardware. Unlike existing techniques, which act solely to reduce power consumption due to sign extension, here the dynamic range is exploited to increase operational capacity while still achieving reduced power consumption. This extends an existing system-level, power-efficient framework for the design of low-power DSP IP cores, which, when applied to the design of an FFT IP core in a digital receiver system, gives an architecture requiring 50% fewer multipliers, 12% fewer slices and 51%-56% less power.
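A software analogue of the dynamic-range partitioning idea can be sketched as follows: two narrow (here unsigned 12-bit) sample streams are packed into one 32-bit word with guard bits between them, so a single addition advances both accumulations at once. This is only a sketch of the concept, not the IP-core architecture from the paper:

```python
def pack(a, b, width=12, guard=4):
    """Place stream B in the high field, leaving `guard` bits so that
    carries from stream A's field cannot spill into B's field."""
    return (b << (width + guard)) | a

def unpack(w, width=12, guard=4):
    mask = (1 << width) - 1
    return w & mask, (w >> (width + guard)) & mask

# Accumulate two streams with one add per sample pair.
xs = [3, 10, 250]
ys = [7, 20, 100]
acc = 0
for a, b in zip(xs, ys):
    acc += pack(a, b)        # one addition advances both running sums
low, high = unpack(acc)      # per-stream totals recovered at the end
```

The guard bits play the role the paper assigns to careful partitioning of the datapath's dynamic range: they keep the concurrent threads separable. (Signed data would need extra care, since sign extension is exactly what this scheme must suppress.)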

Relevance: 80.00%

Abstract:

In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be efficiently performed in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that each evaluation is usually computationally intensive and scales badly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We show that the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
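The flavour of the O(N^3)-then-O(N) trick can be reproduced for the two scale hyperparameters of a GP: after one eigendecomposition K0 = VΛVᵀ of the base kernel, both the log-determinant and the quadratic form in the log marginal likelihood reduce to sums over eigenvalues. This sketch covers only the score for (signal variance, noise variance); the paper's identities further cover the Jacobian and Hessian:

```python
import numpy as np

def gp_loglik_factory(K0, y):
    """One O(N^3) eigendecomposition up front; afterwards each evaluation
    of the log marginal likelihood for (signal_var, noise_var) is O(N)."""
    lam, V = np.linalg.eigh(K0)      # K0 = V diag(lam) V^T
    alpha = V.T @ y                  # targets rotated into the eigenbasis
    n = len(y)
    def loglik(signal_var, noise_var):
        d = signal_var * lam + noise_var          # eigenvalues of K + noise
        return -0.5 * (np.sum(np.log(d)) + np.sum(alpha**2 / d)
                       + n * np.log(2 * np.pi))
    return loglik

# Toy data with an RBF base kernel (illustrative choices throughout).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
K0 = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2)
y = np.sin(6 * x) + 0.1 * rng.standard_normal(50)

loglik = gp_loglik_factory(K0, y)
# The direct O(N^3) formula, for comparison at one hyperparameter setting.
Ky = 1.0 * K0 + 0.01 * np.eye(50)
direct = -0.5 * (np.linalg.slogdet(Ky)[1] + y @ np.linalg.solve(Ky, y)
                 + 50 * np.log(2 * np.pi))
```

Each call to `loglik` inside a global optimizer now costs O(N), which is where the claimed orders-of-magnitude speedup over repeated Cholesky factorizations comes from.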

Relevance: 80.00%

Abstract:

An orthogonal vector approach is proposed for the synthesis of multi-beam directional modulation (DM) transmitters. These systems have the capability of concurrently projecting independent data streams into different specified spatial directions while simultaneously distorting signal constellations in all other directions. Simulated bit error rate (BER) spatial distributions are presented for various multi-beam system configurations in order to illustrate representative examples of physical layer security performance enhancement that can be achieved.
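The orthogonal-vector idea can be sketched for a single beam on a uniform linear array: conjugate-beamform the symbol toward the intended direction and add an artificial-noise vector projected into the null space of that direction's steering vector, so the constellation is clean only on target. All parameter choices below are illustrative, not the paper's configuration:

```python
import numpy as np

def steering(theta_deg, n, spacing=0.5):
    """Steering vector of an n-element uniform linear array
    (element spacing in wavelengths)."""
    k = 2 * np.pi * spacing * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(n))

def dm_weights(symbol, theta_deg, n, noise_power=1.0, rng=None):
    """Beamform `symbol` toward theta_deg and inject artificial noise
    orthogonal to that direction's steering vector."""
    rng = rng or np.random.default_rng(0)
    h = steering(theta_deg, n)
    w_sig = symbol * h.conj() / n            # h @ w_sig == symbol
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    z -= h.conj() * (h @ z) / n              # project out the h direction
    return w_sig + np.sqrt(noise_power) * z

sym = (1 + 1j) / np.sqrt(2)                  # one QPSK symbol
w = dm_weights(sym, theta_deg=20, n=8)
on_target = steering(20, 8) @ w              # received exactly as sent
off_target = steering(-40, 8) @ w            # scrambled by the injection
```

Because the injected vector lies in the null space of the intended steering vector, it vanishes on target but corrupts the constellation everywhere else; the multi-beam synthesis in the paper repeats this construction for several directions and data streams simultaneously.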

Relevance: 80.00%

Abstract:

A 10 GHz Fourier Rotman lens enabled dynamic directional modulation (DM) transmitter is experimentally evaluated. Bit error rate (BER) performance is obtained via real-time data transmission. It is shown that Fourier Rotman DM functionality enhances system security in terms of a narrower decodable low-BER region and higher BER values in the BER sidelobes, especially under high signal-to-noise ratio (SNR) scenarios. This enhancement is achieved by controlled corruption of the constellation diagrams in IQ space through orthogonal injection of interference. Furthermore, the paper gives the first report of a functional dual-beam DM transmitter, which has the capability of simultaneously projecting two independent data streams into two different spatial directions while scrambling the information signals along all other directions.

Relevance: 80.00%

Abstract:

Compensation for the dynamic response of a temperature sensor usually involves the estimation of its input on the basis of the measured output and model parameters. In the case of temperature measurement, the sensor dynamic response is strongly dependent on the measurement environment and fluid velocity. Estimation of time-varying sensor model parameters therefore requires continuous in situ identification. This can be achieved by employing two sensors with different dynamic properties, and exploiting structural redundancy to deduce the sensor models from the resulting data streams. Most existing approaches to this problem assume first-order sensor dynamics. In practice, however, second-order models are more reflective of the dynamics of real temperature sensors, particularly when they are encased in a protective sheath. As such, this paper presents a novel difference-equation approach to solving the blind identification problem for sensors with second-order models. The approach is based on estimating an auxiliary ARX model whose parameters are related to the desired sensor model parameters through a set of coupled non-linear algebraic equations. The ARX model can be estimated using conventional system identification techniques, and the non-linear equations can be solved analytically to yield estimates of the sensor models. Simulation results are presented to demonstrate the efficiency of the proposed approach under various input and parameter conditions.
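The conventional ARX estimation step that the approach builds on can be sketched with ordinary least squares on a simulated second-order sensor. This is a known-input toy case for illustration; the paper's blind setting instead relates the outputs of two sensors and then solves coupled non-linear equations for the sensor parameters:

```python
import numpy as np

# True second-order discrete sensor model (coefficients chosen for
# illustration; double pole at z = 0.8, so the model is stable).
a1, a2, b1, b2 = 1.6, -0.64, 0.03, 0.01

rng = np.random.default_rng(1)
u = rng.standard_normal(500)                 # persistently exciting input
y = np.zeros(500)
for k in range(2, 500):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + b1 * u[k-1] + b2 * u[k-2]

# Least-squares ARX fit: regress y[k] on past outputs and past inputs.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
```

With noise-free data the regression recovers the four coefficients essentially exactly; in the blind two-sensor setting the same machinery estimates the auxiliary ARX model, whose coefficients are then mapped analytically back to the individual sensor parameters.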

Relevance: 80.00%

Abstract:

Efficient identification and follow-up of astronomical transients is hindered by the need for humans to manually select promising candidates from data streams that contain many false positives. These artefacts arise in the difference images that are produced by most major ground-based time-domain surveys with large-format CCD cameras. This dependence on humans to reject bogus detections is unsustainable for next-generation all-sky surveys, and significant effort is now being invested to solve the problem computationally. In this paper, we explore a simple machine learning approach to real-bogus classification by constructing a training set from the image data of approximately 32 000 real astrophysical transients and bogus detections from the Pan-STARRS1 Medium Deep Survey. We derive our feature representation from the pixel intensity values of a 20 × 20 pixel stamp around the centre of the candidates. This differs from previous work in that it works directly on the pixels rather than on catalogued domain knowledge for feature design or selection. Three machine learning algorithms are trained (artificial neural networks, support vector machines and random forests) and their performances are tested on a held-out subset of 25 per cent of the training data. We find the best results from the random forest classifier and demonstrate that, by accepting a false positive rate of 1 per cent, the classifier initially suggests a missed detection rate of around 10 per cent. However, we also find that a combination of bright star variability, nuclear transients and uncertainty in human labelling means that our best estimate of the missed detection rate is approximately 6 per cent.
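A minimal version of the pixel-based real-bogus experiment can be sketched with scikit-learn on synthetic 20 × 20 stamps (Gaussian blobs standing in for real transients, hot pixels for bogus detections; the data here are entirely illustrative, not the Pan-STARRS1 set):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def stamp(real):
    """A 20x20 pixel stamp: a PSF-like blob for 'real' transients,
    a single hot pixel for 'bogus' detections, both plus sky noise."""
    img = rng.normal(0, 1, (20, 20))
    if real:
        yy, xx = np.mgrid[:20, :20]
        img += 8 * np.exp(-((yy - 10)**2 + (xx - 10)**2) / (2 * 2.5**2))
    else:
        img[rng.integers(20), rng.integers(20)] += 25
    return img.ravel()            # raw pixels are the feature vector

X = np.array([stamp(i % 2 == 0) for i in range(600)])
labels = np.array([i % 2 == 0 for i in range(600)], dtype=int)
Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.25,
                                      random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)         # held-out accuracy
```

As in the paper, the classifier sees only raw pixel intensities, and the held-out split mirrors the 25 per cent test protocol; on real survey data the interesting quantity is the missed-detection rate at a fixed false-positive rate rather than plain accuracy.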