994 results for Over sampling


Relevance:

100.00%

Abstract:

This contribution proposes a novel probability density function (PDF) estimation based over-sampling (PDFOS) approach for two-class imbalanced classification problems. The classical Parzen-window kernel function is adopted to estimate the PDF of the positive class. Then, according to the estimated PDF, synthetic instances are generated as additional training data. The essential concept is to re-balance the class distribution of the original imbalanced data set under the principle that the synthetic data samples follow the same statistical properties. Based on the over-sampled training data, the radial basis function (RBF) classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier's structure and the parameters of the RBF kernels are determined using a particle swarm optimisation algorithm based on the criterion of minimising the leave-one-out misclassification rate. The effectiveness of the proposed PDFOS approach is demonstrated by an empirical study on several imbalanced data sets.
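A minimal sketch of the core idea, using scipy's Gaussian KDE as a stand-in for the paper's Parzen-window estimator; the bandwidth rule (scipy's default) and the function names are illustrative assumptions, not the paper's exact method:

```python
import numpy as np
from scipy.stats import gaussian_kde

def pdf_oversample(X_min, n_new):
    """Fit a Parzen-window (Gaussian-kernel) density to the minority
    class and draw n_new synthetic instances from the estimated PDF."""
    kde = gaussian_kde(X_min.T)      # scipy expects shape (d, n)
    return kde.resample(n_new).T     # synthetic samples, shape (n_new, d)

rng = np.random.default_rng(0)
X_min = rng.normal(loc=(2.0, -1.0), scale=0.5, size=(40, 2))  # toy positive class
X_syn = pdf_oversample(X_min, n_new=160)
print(X_syn.shape)  # (160, 2): enough to balance a 40-vs-200 training set
```

The synthetic rows would then be appended to the training set before fitting the RBF (or any other) classifier.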

Relevance:

70.00%

Abstract:

A problem with the use of the geostatistical Kriging error for optimal sampling design is that the design does not adapt locally to the character of spatial variation, because a stationary variogram or covariance function is a parameter of the geostatistical model. The objective of this paper was to investigate the utility of non-stationary geostatistics for optimal sampling design. First, a contour data set of Wiltshire was split into 25 equal sub-regions and a local variogram was predicted for each. These variograms were fitted with models, and the coefficients were used in Kriging to select optimal sample spacings for each sub-region. Large differences existed between the designs for the whole region (based on the global variogram) and for the sub-regions (based on the local variograms). Second, a segmentation approach was used to divide a digital terrain model into separate segments. Segment-based variograms were predicted and fitted with models, and optimal sample spacings were then determined for the whole region and for the sub-regions. It was demonstrated that the global design was inadequate, grossly over-sampling some segments while under-sampling others.
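A minimal numpy-only sketch of the per-sub-region step: computing an empirical semivariogram from scattered observations, to which a model would then be fitted separately for each sub-region. The toy data, lag spacing and tolerance are illustrative assumptions:

```python
import numpy as np

def empirical_variogram(xy, z, lags, tol):
    """Semivariance gamma(h) at each lag h: half the mean squared
    difference of z over point pairs separated by roughly h."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[np.abs(d - h) < tol].mean() for h in lags])

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(200, 2))        # toy sample locations
z = np.sin(xy[:, 0] / 15) + 0.1 * rng.standard_normal(200)
lags = np.arange(5.0, 50.0, 5.0)
print(empirical_variogram(xy, z, lags, tol=2.5))
```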

Relevance:

60.00%

Abstract:

Research cruises were conducted in August–October 2007 to complete the third annual remotely operated vehicle (ROV)-based assessments of nearshore rocky-bottom finfish at ten sites in the northern Channel Islands. Annual surveys at the Channel Islands have been conducted since 2004 at four sites and were expanded to ten sites in 2005 to monitor potential marine protected area (MPA) effects on baseline fish density. Six of the ten sites are in MPAs and four in nearby fished reference areas. In 2007 the amount of soft-only substrate on the 141 track lines surveyed was again estimated in real time in order to target rocky-bottom habitat. These real-time estimates of hard and mixed substrate for all ten sites averaged 57%, 1% more than the post-processed average of 56%. Surveys generated 69.9 km of usable video for finfish density calculations, with target rocky-bottom habitat accounting for 56% (39.1 km) for all sites combined. The amount of rocky habitat sampled by site averaged 3.8 km, ranging from 3.3 km at South Point, a State Marine Reserve (SMR) off Santa Rosa Island, to 4.7 km at Anacapa Island SMR. A sampling goal of 75 transects at all ten sites was met using real-time habitat estimates combined with precautionary over-sampling by 10%; a total of seventy kilometers of sampling is projected to produce at least seventy-five 100 m² transects per site. Thirteen of the 26 finfish taxa observed were selected for quantitative evaluation over the time series based on a minimum abundance criterion (0.05 per 100 m²). Ten of these 13 finfish appear to be more abundant in the state marine reserves than in the fished areas when densities are averaged across the 2005 to 2007 period. One of the species that appears to be more abundant in fished areas was señorita, a relatively small prey species that is not a commercial or recreational target. (PDF contains 83 pages.)

Relevance:

60.00%

Abstract:

As research in this area has advanced, correlated (ghost) diffraction imaging has now been demonstrated with classical thermal light sources, making the technique a promising candidate for wide application in X-ray and neutron diffraction imaging. Building on an experiment in which the lensless Fourier-transform spectrum of an object was obtained with incoherent light, the error-reduction and hybrid input-output recovery algorithms were applied, in combination with over-sampling theory, to recover the transmittance function of the object used in the experiment. Both the amplitude distribution of a pure-amplitude object and the phase distribution of a pure-phase object were obtained. The influence of noise in the experimentally obtained Fourier-transform spectrum, among other factors, on the image recovery results is also discussed.
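An illustrative error-reduction iteration in numpy, alternating between the measured Fourier magnitude and an object-space support constraint. The toy object, support mask and iteration count are assumptions for illustration, not the experimental setup, and the hybrid input-output variant is omitted:

```python
import numpy as np

def error_reduction(fourier_mag, support, n_iter=500):
    """Recover a real, non-negative object from its over-sampled
    Fourier magnitude, given a binary support mask."""
    rng = np.random.default_rng(0)
    g = rng.random(fourier_mag.shape) * support     # random start inside support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))  # impose measured magnitude
        g = np.real(np.fft.ifft2(G))
        g = np.where((g > 0) & support.astype(bool), g, 0.0)  # object constraints
    return g

# Toy object on a 2x over-sampled grid (object fills only part of the field)
obj = np.zeros((64, 64)); obj[24:40, 24:40] = 1.0
support = np.zeros_like(obj); support[20:44, 20:44] = 1.0
rec = error_reduction(np.abs(np.fft.fft2(obj)), support)
```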

Relevance:

60.00%

Abstract:

For most fisheries applications, the shape of a length-frequency distribution is much more important than its mean length or variance, which makes it difficult to evaluate at which point a sample size is adequate. By estimating the coefficient of variation (CV) of the counts in each length class and taking a weighted mean of these, a measure of precision was obtained that takes the precision in all length classes into account. The precision estimates were closely associated with the ratio of the sample size to the number of size classes in each sample. As a rule of thumb, a minimum sample size of 10 times the number of length classes in the sample is suggested, because precision deteriorates rapidly for smaller sample sizes. In the absence of such a rule of thumb, samplers have previously under-estimated the required sample size for samples of large fish, while over-sampling small fish of the same species.
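A sketch of the precision measure under stated assumptions: bootstrap resampling stands in for repeated field samples, and weighting the per-class CVs by mean counts is an illustrative choice, not necessarily the paper's weighting:

```python
import numpy as np

def weighted_mean_cv(lengths, class_edges, n_boot=1000, seed=None):
    """Weighted mean CV of the counts in each length class, with CVs
    estimated by bootstrap resampling of the length sample."""
    rng = np.random.default_rng(seed)
    counts = np.array([np.histogram(rng.choice(lengths, size=len(lengths)),
                                    bins=class_edges)[0]
                       for _ in range(n_boot)])
    mean, sd = counts.mean(axis=0), counts.std(axis=0, ddof=1)
    keep = mean > 0                       # ignore empty classes
    return np.average(sd[keep] / mean[keep], weights=mean[keep])

lengths = np.random.default_rng(2).normal(30, 5, size=150)  # toy sample, cm
class_edges = np.arange(10, 51, 2)                           # 20 length classes
# Rule of thumb would ask for 10 * 20 = 200 fish; this sample has only 150.
print(weighted_mean_cv(lengths, class_edges, seed=2))
```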

Relevance:

60.00%

Abstract:

Our country has a vast maritime territory, and exploring and developing the ocean is becoming increasingly important. The Ocean Bottom Seismometer (OBS) is therefore an indispensable instrument with many oceanic applications: it is important not only for studying the structure of the oceanic lithosphere, but also plays a central role in marine geophysical exploration. This paper presents my work in this area. The MCI micro-power broadband seismometer was developed independently. Its power dissipation is less than 300 mW, and it is small, lightweight and inexpensive. It is an ideal device not only for collecting high-resolution natural seismic data, but also for seismic sounding and engineering seismology. Several new techniques were applied in developing this instrument, including an over-sampling A/D converter, a high-performance 32-bit microprocessor unit (MPU) and flash memory with a SmartMedia interface. Based on this achievement, I completed a prototype OBS intended for deep-water oil and gas geophysical exploration. Because of its very low power dissipation, the seismograph and the sonar releaser can be integrated into a single spherical cabin, which markedly improves the instrument's resonance and coupling frequencies. The OBS data acquisition system is derived from the MCI seismometer: the flash memory capacity is enlarged from 1 GB to 8 GB, and the advanced MPU in the data acquisition system integrates additional function modules such as the sonar, GPS, compass and digital transmitter.
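A toy numpy illustration of the over-sampling principle such converters exploit: averaging blocks of coarsely quantised, dithered samples recovers resolution, roughly one extra bit per 4x over-sampling. The rates, dither level and resolution below are arbitrary choices for the demo, not the instrument's specification:

```python
import numpy as np

def quantize(x, bits):
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 64_000)
x = 0.5 + 0.4 * np.sin(2 * np.pi * 5 * t)        # slow in-band signal
dither = 0.01 * rng.standard_normal(t.size)       # noise makes averaging effective
coarse = quantize(np.clip(x + dither, 0.0, 1.0), bits=8)
decimated = coarse.reshape(-1, 64).mean(axis=1)   # 64x over-sampling and decimation
ref = x.reshape(-1, 64).mean(axis=1)
rms = np.sqrt(np.mean((decimated - ref) ** 2))
print(rms)   # well below one 8-bit LSB (1/255 ~ 0.0039)
```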

Relevance:

60.00%

Abstract:

Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them apart from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take local densities into account; this is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where many more sensors exist. Thus, we envision that a fundamental service a wireless sensor network should provide is that of estimating local densities. In this paper, we propose a lightweight probabilistic density inference protocol, which we call DIP, that allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers as in existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis which gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at very low energy cost and in constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from estimated local densities.
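The flavour of the inversion can be illustrated with a toy slotted-contention model (an assumption for illustration; DIP's actual contention measurement differs): if each of n neighbours transmits independently with probability p per slot, the idle-slot fraction estimates (1 - p)^n, which can be inverted for n without exchanging identifiers:

```python
import numpy as np

def estimate_density(idle_fraction, p):
    """Invert P(idle slot) = (1 - p)^n for the neighbourhood size n."""
    return np.log(idle_fraction) / np.log(1.0 - p)

rng = np.random.default_rng(4)
n_true, p, slots = 25, 0.05, 2000
transmissions = rng.random((slots, n_true)) < p      # who transmits in each slot
idle_fraction = np.mean(~transmissions.any(axis=1))  # slots with no contention
print(estimate_density(idle_fraction, p))            # close to n_true = 25
```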

Relevance:

60.00%

Abstract:

This programme of research used a developmental psychopathology approach to investigate females across the adolescent period. A two-sided story is presented: first, a study of neuroendocrine and psychosocial parameters in a group of healthy female adolescents (N = 63), followed by a parallel study of female adolescents with anorexia nervosa (AN) (N = 8). A biopsychosocial, multi-method measurement approach was taken, which utilised self-report, interview and hypothalamic-pituitary-adrenocortical (HPA) axis measures. Saliva samples for the measurement of cortisol and DHEA were collected using the best-recommended methodology: multiple samples over the day, strict reference to time of awakening, and two consecutive sampling weekdays. The research was adolescent-orientated: specifically, by using creative and age-appropriate strategies to ensure participant adherence to protocol, as well as more generally by adopting various procedures to facilitate engagement with the research process. In the healthy females, mean (± SD) age 13.9 (± 2.7) years, cortisol and DHEA secretion exhibited typical adult-like diurnal patterns. Developmental markers of chronological age, menarche status and body mass index (BMI) had differential associations with cortisol and DHEA secretory activity. The pattern of the cortisol awakening response (CAR) was sensitive to whether participants had experienced first menses, but not to chronological age or BMI. Those who were post-menarche generally reached their peak point of cortisol secretion at 45 minutes post-awakening, in contrast to the pre-menarche group, whose peak times were more evenly spread. Subsequent daytime cortisol levels were also higher in post-menarche females, and this effect was also noted for increasing age and BMI. Both morning and evening DHEA were positively associated with developmental markers. None of the situational or self-report psychosocial variables that were measured modulated any of the key findings regarding cortisol and DHEA secretion. The healthy group of girls were within age-appropriate norms for all the self-report measures used; however, just under half of this group were insecurely attached (as assessed by interview). Only attachment style was associated with neuroendocrine parameters. In particular, those with an anxious insecure style exhibited a higher awakening sample (levels were 7.16 nmol/l, 10.40 nmol/l and 7.93 nmol/l for secure, anxious and avoidant groups, respectively) and a flatter CAR (mean increases over the awakening period were 6.38 nmol/l, 2.32 nmol/l and 8.61 nmol/l for secure, anxious and avoidant groups, respectively). This pattern is similar to that consistently associated with psychological disorder in adults, and so may be a pre-clinical vulnerability factor for subsequent mental health problems.

A group of females with AN, mean (± SD) age 15.1 (± 1.6) years, were recruited from a specialist residential clinic and compared to the above group of healthy control (HC) female adolescents. A general picture of cortisol and DHEA hypersecretion was revealed in those with AN. The mean (± SD) change in cortisol levels over the 30-minute post-awakening period was 7.05 nmol/l (± 5.99) and 8.33 nmol/l (± 6.41) for the HC and AN groups, respectively. The mean (± SD) evening cortisol level for the HC girls was 1.95 nmol/l (± 2.11), in comparison to 6.42 nmol/l (± 11.10) for the AN group. Mean (± SD) morning DHEA concentrations were 1.47 nmol/l (± 0.85) and 2.25 nmol/l (± 0.88) for the HC and AN groups, respectively. The HC group's mean (± SD) 12-hour DHEA concentration was 0.55 nmol/l (± 0.46), and the AN group's mean level was 0.89 nmol/l (± 0.90). This adrenal steroid hypersecretion in the AN group was not associated with BMI or eating disorder symptomatology. Insecure attachment characterised by fearfulness and anger was most apparent, a style which was unparalleled in the healthy group of female adolescents. The causal directions of the AN group findings remain unclear. Examining some of the participants with AN as case studies one year post-discharge from the clinic showed that for one participant who had recovered, in terms of returning to ordinary school life and no longer exhibiting clinical levels of eating disorder symptomatology, her CARs were no longer inconsistent over sampling days and her DHEA levels were now generally comparable to the healthy control group. For another participant who had not recovered from her AN one year later, the profile of her CAR continued to be inconsistent over sampling days and her DHEA concentrations over the diurnal period remained significantly higher than those of the healthy control group. In its entirety, this work's unique contribution lies in its consideration of methodological and developmental issues specifically pertaining to adolescents. The findings also contribute to knowledge of AN and understanding of vulnerability factors, and how these may be used to develop interventions dedicated to improving adolescent health.
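For reference, a minimal sketch of how the CAR summary used above is derived from timed saliva samples; the sampling times follow the protocol described (awakening plus fixed offsets), but the concentration values are invented for illustration:

```python
# Saliva cortisol (nmol/l) keyed by minutes post-awakening
samples = {0: 7.2, 15: 10.1, 30: 14.3, 45: 15.0}

awakening = samples[0]
car = max(samples.values()) - awakening      # rise over the awakening period
peak_time = max(samples, key=samples.get)    # e.g. 45 min in post-menarche girls
print(f"CAR = {car:.2f} nmol/l, peak at {peak_time} min post-awakening")
```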

Relevance:

60.00%

Abstract:

Context. Case-control studies are widely used by epidemiologists to assess the impact of exposures on a particular disease. These exposures may be represented by several time-dependent variables, and new methods are needed to estimate their effects accurately. Indeed, logistic regression, the conventional method for analysing case-control data, does not directly account for changes in covariate values over time. By contrast, survival-analysis methods such as the Cox proportional hazards model can directly incorporate time-dependent covariates representing individual exposure histories. However, this requires careful handling of the risk sets because of the over-sampling of cases, relative to controls, in case-control studies. As shown in a previous simulation study, the optimal definition of the risk sets for the analysis of case-control data remains to be elucidated, and to be studied for time-dependent variables. Objective: The general objective is to propose and study new versions of the Cox model for estimating the impact of time-varying exposures in case-control studies, and to apply them to real case-control data on lung cancer and smoking. Methods. I identified new, potentially optimal risk-set definitions (the Weighted Cox model and the Simple weighted Cox model), in which different weights were assigned to cases and controls so as to reflect the proportions of cases and non-cases in the source population. The properties of the exposure-effect estimators were studied by simulation. Different aspects of exposure were generated (intensity, duration, cumulative exposure). The generated case-control data were then analysed with different versions of the Cox model, including the existing and new risk-set definitions, as well as with conventional logistic regression for comparison. The different regression models were then applied to real case-control data on lung cancer, and the estimates of the effects of the different smoking variables obtained with the different methods were compared with each other and with the simulation results. Results. The simulation results show that the estimates from the proposed new weighted Cox models, especially those of the Weighted Cox model, are far less biased than the estimates of existing Cox models that simply include or exclude the future cases from each risk set. Moreover, the Weighted Cox model estimates were slightly, but systematically, less biased than those of logistic regression. The application to the real data shows larger differences between the estimates of logistic regression and of the weighted Cox models for some time-dependent smoking variables. Conclusions. The results suggest that the proposed new weighted Cox model could be an interesting alternative to logistic regression for estimating the effects of time-dependent exposures in case-control studies.
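A hedged sketch of the weighting idea using lifelines: cases and controls receive weights that restore the case/non-case proportions of the source population before fitting a Cox model. The weight formula, prevalence value and data are illustrative, not the thesis's exact estimators, and time-dependent covariates would need lifelines' CoxTimeVaryingFitter instead:

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time":     [4.0, 6.5, 7.2, 8.1, 9.0, 9.9],
    "event":    [1, 1, 1, 0, 0, 0],            # 1 = case, 0 = sampled control
    "exposure": [3.1, 1.1, 2.9, 2.5, 0.8, 1.2],
})
prevalence = 0.02                  # assumed case proportion in the source population
p_case = df["event"].mean()        # case proportion in the case-control sample
df["w"] = df["event"].map({1: prevalence / p_case,
                           0: (1 - prevalence) / (1 - p_case)})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", weights_col="w", robust=True)
cph.print_summary()
```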

Relevance:

60.00%

Abstract:

The demand for new telecommunication services requiring higher capacities, data rates and different operating modes has motivated the development of a new generation of multi-standard wireless transceivers. In multi-standard design, the sigma-delta ADC is one of the most popular choices. To this end, in this paper we present a cascaded 2-2-2 reconfigurable sigma-delta modulator that can handle the GSM, WCDMA and WLAN standards. The modulator makes use of a low-distortion swing-suppression topology which is highly suitable for wide-band applications. In GSM mode, only the first stage (a 2nd-order Σ-Δ ADC) is used, achieving a peak SNDR of 88 dB with an over-sampling ratio of 160 for a bandwidth of 200 kHz. In WCDMA mode, a 2-2 cascaded structure (4th order) is turned on, with 1 bit in the first stage and 2 bits in the second stage, to achieve a 74 dB peak SNDR with an over-sampling ratio of 16 for a bandwidth of 2 MHz. Finally, a 2-2-2 cascaded MASH architecture with 4 bits in the last stage is proposed to achieve a peak SNDR of 58 dB for WLAN over a bandwidth of 20 MHz. The novelty lies in the fact that the unused blocks of the second and third stages can be made inactive to reduce power consumption. The modulator is designed in TSMC 0.18 µm CMOS technology and operates from a 1.8 V supply.
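As a rough cross-check, the standard ideal peak-SQNR formula for an L-th-order modulator with an N-bit quantiser at over-sampling ratio OSR is SQNR ≈ 6.02N + 1.76 + 10·log10((2L+1)/π^(2L)) + (2L+1)·10·log10(OSR). The sketch below evaluates this idealised bound for the GSM and WCDMA settings; as expected for a bound with perfect noise shaping, it sits above the reported SNDRs:

```python
import math

def ideal_peak_sqnr_db(L, N, osr):
    """Ideal peak SQNR of an L-th order sigma-delta modulator with an
    N-bit quantiser at over-sampling ratio osr (perfect noise shaping)."""
    return (6.02 * N + 1.76
            + 10 * math.log10((2 * L + 1) / math.pi ** (2 * L))
            + (2 * L + 1) * 10 * math.log10(osr))

print(ideal_peak_sqnr_db(L=2, N=1, osr=160))  # ~105 dB bound vs 88 dB reported (GSM)
print(ideal_peak_sqnr_db(L=4, N=2, osr=16))   # ~92 dB bound vs 74 dB reported (WCDMA)
```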

Relevance:

60.00%

Abstract:

Over-sampling sigma-delta analogue-to-digital converters (ADCs) are one of the key building blocks of state-of-the-art wireless transceivers. In sigma-delta modulator design, the scaling coefficients determine the overall signal-to-noise ratio, so selecting the optimum coefficient values is very important. To this end, this paper addresses the design of a fourth-order multi-bit sigma-delta modulator for a Wireless Local Area Network (WLAN) receiver with a feed-forward path, in which the optimum coefficients are selected using a genetic algorithm (GA)-based search method. In particular, the proposed converter makes use of a low-distortion swing-suppression SDM architecture which is highly suitable for low over-sampling ratios, to attain high linearity over a wide bandwidth. The focus of this paper is the identification of the best coefficients for the proposed topology, as well as the optimization of a set of system parameters, in order to achieve the desired signal-to-noise ratio. The GA-based search engine is a stochastic search method which can find the optimum solution within the given constraints.
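A generic sketch of such a GA-based coefficient search, with selection and mutation only (crossover omitted for brevity); the fitness function here is a placeholder for the behavioural modulator simulation that would score SNR, and the population sizes are arbitrary:

```python
import numpy as np

def fitness(coeffs):
    # Placeholder: a real run would simulate the modulator and return its SNR.
    target = np.array([0.5, 0.4, 0.3, 0.2])
    return -np.sum((coeffs - target) ** 2)

rng = np.random.default_rng(5)
pop = rng.uniform(0.0, 1.0, size=(50, 4))           # 50 candidate coefficient sets
for _ in range(100):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-25:]]          # keep the fitter half
    children = parents[rng.integers(0, 25, size=25)] \
               + rng.normal(0.0, 0.05, (25, 4))      # mutate copies of parents
    pop = np.vstack([parents, np.clip(children, 0.0, 1.0)])

best = max(pop, key=fitness)
print(best)   # converges towards the coefficients maximising the fitness
```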

Relevance:

60.00%

Abstract:

This contribution proposes a powerful technique for two-class imbalanced classification problems by combining the synthetic minority over-sampling technique (SMOTE) with a particle swarm optimisation (PSO)-aided radial basis function (RBF) classifier. In order to enhance the significance of the small and specific region belonging to the positive class in the decision region, SMOTE is applied to generate synthetic instances for the positive class to balance the training data set. Based on the over-sampled training data, the RBF classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier's structure and the parameters of the RBF kernels are determined using a PSO algorithm based on the criterion of minimising the leave-one-out misclassification rate. Experimental results obtained on a simulated imbalanced data set and three real imbalanced data sets demonstrate the effectiveness of the proposed algorithm.
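The SMOTE step itself is available off the shelf in imbalanced-learn; a minimal sketch on toy data (the PSO-tuned RBF classifier from the paper is not reproduced here, and any downstream classifier would simply be trained on X_bal, y_bal):

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Toy two-class imbalanced data: roughly 5% positive class
X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_bal))   # minority class synthetically balanced
```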

Relevance:

60.00%

Abstract:

Vehicular networks ensure that the information received from any vehicle is promptly and correctly propagated to nearby vehicles, to prevent accidents. A crucial point is how to trust the information transmitted when the neighboring vehicles are rapidly changing and moving in and out of range. Current trust management schemes for vehicular networks establish trust by voting on the decision received by several nodes, which might not be required for practical scenarios; it might be enough simply to check the validity of incoming information. Owing to the ephemeral nature of vehicular networks, reputation schemes for mobile ad hoc networks (MANETs) cannot be applied to vehicular ad hoc networks (VANETs). We point out several limitations of trust management schemes for VANETs. In particular, we identify the problems of information cascading and oversampling, which commonly arise in social networks. Oversampling is a situation in which a node observing two or more nodes weighs their opinions equally, without knowing that they might have influenced each other in their decision making. We show that simple voting for decision making leads to oversampling and gives incorrect results. We propose an algorithm to overcome this problem in VANETs. This is the first paper to apply the concepts of cascading and oversampling effects to ad hoc networks. © 2011 IEEE.
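A toy simulation of the oversampling effect (an illustration of the concept, not the paper's algorithm): node B merely relays A's opinion, so counting B as an independent vote makes a three-node majority collapse to A's lone accuracy, whereas three truly independent observers would do better:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 100_000, 0.7                  # trials; accuracy of a direct observation
a = rng.random(n) < p                # A observes the event directly
b = a                                # B just echoes A's opinion (cascade)
c = rng.random(n) < p                # C observes independently
b_indep = rng.random(n) < p          # what an independent B would look like

cascaded = (a.astype(int) + b + c) >= 2          # naive majority: B oversampled
independent = (a.astype(int) + b_indep + c) >= 2
print(cascaded.mean(), independent.mean())       # ~0.70 vs ~0.78: echoes add nothing
```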

Relevance:

60.00%

Abstract:

In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination and participating media, in an elegant and unified framework. However, MCPT is a sampling-based approach and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists in increasing the sampling rate on a per-pixel basis, to ensure that each pixel value is below a predefined error threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, in an attempt to achieve an optimal trade-off between minimizing the residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies, by ensuring that we only sample densely those regions of the image where adaptive reconstruction cannot properly resolve the noise. In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
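A skeleton of the greedy sample/reconstruct loop on a 1D toy "image", under stated simplifications: each pixel's Monte Carlo estimate is modelled as the true value plus noise shrinking as 1/sqrt(sample count), the filterbank is replaced by a single Gaussian blur, and the error proxy and budgets are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(7)
truth = np.sin(np.linspace(0.0, 6.0, 256)) ** 2 + 0.1   # ground-truth "image"
counts = np.full(256, 8)                                 # initial samples per pixel

def render(counts):
    # Per-pixel Monte Carlo estimate: noise shrinks as 1/sqrt(sample count)
    return truth + 0.5 * rng.standard_normal(256) / np.sqrt(counts)

for _ in range(4):                                       # sampling/reconstruction passes
    img = render(counts)
    recon = gaussian_filter1d(img, sigma=1.5)            # stand-in reconstruction
    rel_err = (0.5 / np.sqrt(counts)) / (recon + 1e-3)   # per-pixel relative-error proxy
    extra = np.zeros(256, dtype=int)
    extra[np.argsort(rel_err)[-64:]] = 16                # densify the noisiest quarter
    counts = counts + extra

print(np.abs(recon - truth).mean())                      # residual error after adaptation
```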