924 results for probabilistic Hough transform
Abstract:
A near-bottom geological and geophysical survey was conducted at the western intersection of the Siqueiros Transform Fault and the East Pacific Rise. Transform-fault shear appears to distort the east flank of the rise crest in an area north of the fracture zone. Inward-facing scarps trend 335° and do not parallel the regional axis of spreading. Small-scale scarps reveal a hummocky bathymetry. The center of spreading is not a central peak but rather a 20-40 m deep, 1 km wide valley superimposed upon an 8 km wide ridge-crest horst. Small-scale topography indicates widespread volcanic flows within the valley. Two 0.75 km wide blocks flank the central valley. Fault scarps are more dominant on the western flank. Their alignment shifts from intermediate directions to parallel with the regional axis of spreading (355°). A median ridge within the fracture zone has a fault-block topography similar to that of the East Pacific Rise to the north. Dominant eastward-facing scarps trending 335° are on the west flank. A central depression, 1 km wide and 30 m deep, separates the dominantly fault-block regime of the west from the smoother topography of the east flank. This ridge originated by uplift due to faulting as well as by volcanism. Detailed mapping was concentrated in a perched basin (Dante's Hole) at the intersection of the rise crest and the fracture zone. Structural features suggest that Dante's Hole is an area subject to extreme shear and tensional drag resulting from the transition between non-rigid and rigid crustal behavior. Normal E-W crustal spreading is probably taking place well within the northern confines of the basin. Possible residual spreading of this isolated rise crest coupled with shear drag within the transform fault could explain the structural isolation of Dante's Hole from the remainder of the Siqueiros Transform Fault.
Abstract:
Peer reviewed
Abstract:
Peer reviewed
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
In this work, we introduce the periodic nonlinear Fourier transform (PNFT) method as an alternative and efficacious tool for the compensation of nonlinear transmission effects in optical fiber links. In Part I, we introduce the algorithmic platform of the technique, describing in detail the direct and inverse PNFT operations, also known as the inverse scattering transform for the periodic (in the time variable) nonlinear Schrödinger equation (NLSE). We pay special attention to explaining the potential advantages of PNFT-based processing over the previously studied nonlinear Fourier transform (NFT) based methods. Further, we elucidate the issue of numerical PNFT computation: we compare the performance of four known numerical methods applicable to the calculation of nonlinear spectral data (the direct PNFT), in particular taking the main spectrum (utilized further in Part II for modulation and transmission) associated with some simple example waveforms as the quality indicator for each method. We show that the Ablowitz-Ladik discretization approach for the direct PNFT provides the best performance in terms of accuracy and computational time.
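The core of any direct PNFT computation is the monodromy matrix accumulated over one period of the signal. As a rough illustration of the Ablowitz-Ladik variant singled out above (a minimal sketch with invented helper names, not the authors' code), the following computes the Floquet discriminant Δ(λ) = tr M(λ)/2, whose level sets Δ = ±1 locate the main spectrum:

```python
import cmath

def monodromy(q, h, lam):
    """Ordered product of Ablowitz-Ladik transfer matrices over one period.

    q   : complex field samples q[0..N-1] over one period
    h   : time step
    lam : spectral parameter lambda
    One common form of the AL step replaces the Euler factors 1 -+ i*lam*h
    of the Zakharov-Shabat system by exp(-+ i*lam*h), with normalization.
    """
    m = [[1, 0], [0, 1]]                  # 2x2 identity accumulator
    zi = cmath.exp(-1j * lam * h)         # exp(-i*lam*h)
    z = cmath.exp(1j * lam * h)           # exp(+i*lam*h)
    for qn in q:
        Q = qn * h
        c = 1.0 / (1.0 + abs(Q) ** 2) ** 0.5
        t = [[c * zi, c * Q], [-c * Q.conjugate(), c * z]]
        m = [[t[0][0] * m[0][0] + t[0][1] * m[1][0],
              t[0][0] * m[0][1] + t[0][1] * m[1][1]],
             [t[1][0] * m[0][0] + t[1][1] * m[1][0],
              t[1][0] * m[0][1] + t[1][1] * m[1][1]]]
    return m

def floquet_discriminant(q, h, lam):
    # half-trace of the monodromy matrix; main spectrum where it equals +-1
    m = monodromy(q, h, lam)
    return 0.5 * (m[0][0] + m[1][1])
```

In practice one evaluates Δ(λ) on a grid of λ values and searches for the points where Δ ∓ 1 changes sign; for the zero signal the sketch reduces to Δ(λ) = cos(λL), a useful sanity check.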
Abstract:
In this paper we propose the design of communication systems based on the periodic nonlinear Fourier transform (PNFT), following the introduction of the method in Part I. We show that the famous "eigenvalue communication" idea [A. Hasegawa and T. Nyu, J. Lightwave Technol. 11, 395 (1993)] can also be generalized to the PNFT: in this case, the main spectrum attributed to the PNFT signal decomposition remains constant with propagation down the optical fiber link. Therefore, the main PNFT spectrum can be encoded with data in the same way as soliton eigenvalues in the original proposal. The results are presented in terms of bit-error rate (BER) values for different modulation techniques and different constellation sizes vs. the propagation distance, showing the good potential of the technique.
Abstract:
What is the maximum rate at which information can be transmitted error-free in fibre-optic communication systems? For linear channels, this was established in classic works of Nyquist and Shannon. However, despite the immense practical importance of fibre-optic communications providing for >99% of global data traffic, the channel capacity of optical links remains unknown due to the complexity introduced by fibre nonlinearity. Recently, there has been a flurry of studies examining an expected cap that nonlinearity puts on the information-carrying capacity of fibre-optic systems. Mastering nonlinear channels requires a paradigm shift from current modulation, coding and transmission techniques originally developed for linear communication systems. Here we demonstrate that using the integrability of the master model and the nonlinear Fourier transform, the lower bound on the capacity per symbol can be estimated as 10.7 bits per symbol with 500 GHz bandwidth over 2,000 km.
Abstract:
The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in US Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. In Chapters 1 and 2 we present general background information relevant to the development of probabilistic models applied to predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.
The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique. Model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those of previously published models. Results favored a novel hazard function definition that included both ambient pressure scaling and individually fitted compartment exponent scaling terms.
We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine if predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that the inclusion of delays might improve the correlation between model predictions and observed data. Model selection techniques identified two models as having the best overall performance, but comparison to the best performing no-delay model and model selection using our best identified no-delay pharmacokinetic model both indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.
Our final investigation explored parameter bounding techniques to identify parameter regions for which statistical model failure will not occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur and locate the boundaries of these regions with a root-bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization in future investigations.
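As a schematic of the model class described above, the sketch below combines a single well-perfused compartment, a hazard proportional to normalized supersaturation, and a binary-outcome log-likelihood over dive profiles. All parameter names and the deliberately crude one-step-per-segment integration are illustrative assumptions, not the dissertation's actual models:

```python
import math

def tissue_pressure(p_amb, dt, tau, p0):
    # single well-perfused compartment: dP/dt = (p_amb - P) / tau
    return p_amb + (p0 - p_amb) * math.exp(-dt / tau)

def dcs_probability(profile, tau, gain):
    """P(DCS) = 1 - exp(-integral of hazard) for one dive profile.

    profile: list of (duration, ambient pressure) segments. The hazard is
    taken proportional to normalized compartment supersaturation, and the
    integral is approximated one segment at a time (deliberately crude).
    """
    p_t = profile[0][1]          # start equilibrated with the first segment
    risk = 0.0
    for dt, p_amb in profile:
        p_t = tissue_pressure(p_amb, dt, tau, p_t)
        hazard = max(0.0, gain * (p_t - p_amb) / p_amb)
        risk += hazard * dt
    return 1.0 - math.exp(-risk)

def log_likelihood(dives, outcomes, tau, gain):
    # binary-outcome likelihood: P for DCS dives, (1 - P) for clean dives
    ll = 0.0
    for profile, dcs in zip(dives, outcomes):
        p = min(max(dcs_probability(profile, tau, gain), 1e-12), 1.0 - 1e-12)
        ll += math.log(p) if dcs else math.log(1.0 - p)
    return ll
```

Maximum likelihood optimization then searches the (tau, gain) space for the parameters maximizing `log_likelihood` over the empirical dive database; the parameter bounding work above amounts to restricting that search to regions where no observed-DCS profile is assigned zero probability.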
Abstract:
A document accompanies the thesis and is available for consultation at the Centre de conservation des bibliothèques de l'Université de Montréal (http://www.bib.umontreal.ca/conservation/).
Abstract:
This paper proposes a JPEG 2000 compliant architecture capable of computing the 2-D inverse discrete wavelet transform. The proposed architecture uses a single processor and a row-based schedule to minimize control and routing complexity and to ensure that processor utilization is kept at 100%. The design handles image borders through symmetric extension. The architecture has been implemented on a Xilinx Virtex-II FPGA.
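To make the data path concrete, here is a software sketch of the reversible 5/3 lifting step that JPEG 2000 Part 1 specifies, with symmetric extension at the borders; the 2-D inverse transform such an architecture computes applies the 1-D inverse along rows and then columns of each subband recomposition level. Function names are illustrative, and this models the arithmetic only, not the row-based hardware schedule:

```python
def _ref(i, n):
    """Whole-sample symmetric (mirror) index for border extension."""
    if i < 0:
        return -i
    if i >= n:
        return 2 * n - 2 - i
    return i

def fwd_5_3(x):
    """Forward reversible 5/3 lifting (even-length integer 1-D signal)."""
    n = len(x)
    half = n // 2
    g = lambda i: x[_ref(i, n)]
    # predict: detail coefficients at odd positions
    d = [x[2 * k + 1] - ((g(2 * k) + g(2 * k + 2)) >> 1) for k in range(half)]
    dd = lambda k: d[0] if k < 0 else d[k]   # mirrored detail at left border
    # update: smooth coefficients at even positions
    s = [x[2 * k] + ((dd(k - 1) + d[k] + 2) >> 2) for k in range(half)]
    return s, d

def inv_5_3(s, d):
    """Inverse 5/3 lifting: undo the update step, then the predict step."""
    half = len(s)
    n = 2 * half
    dd = lambda k: d[0] if k < 0 else d[k]
    x = [0] * n
    for k in range(half):                     # recover even samples first
        x[2 * k] = s[k] - ((dd(k - 1) + d[k] + 2) >> 2)
    g = lambda i: x[_ref(i, n)]
    for k in range(half):                     # then odd samples
        x[2 * k + 1] = d[k] + ((g(2 * k) + g(2 * k + 2)) >> 1)
    return x
```

Because each lifting step is undone exactly, reconstruction is bit-exact on integers, which is what makes the 5/3 filter the reversible path in JPEG 2000.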
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In Germany, the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method spatially interpolates the normalized power of a set of reference PV plants to estimate the power produced by another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to analyze the uncertainty associated with this method. It was found that this method can lead to large errors when the set of reference plants has different characteristics or weather conditions than the set of unknown plants, and when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach has been chosen by which a power production is calculated at each PV plant from corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequencies of occurrence have been assessed through a statistical analysis of the parameters of approximately 35,000 PV plants. It was found that the plant parameters are statistically dependent on the size and location of the PV plants. Accordingly, separate statistical values have been assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performance of the upscaling and probabilistic approaches has been compared on the basis of 15 min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz.
It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered in this chapter). When the number of reference plants is limited (<50 reference plants for the considered case study), it was found that the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
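The frequency-weighted averaging at the heart of the probabilistic approach condenses to a few lines. The performance-factor abstraction and all numbers below are illustrative stand-ins for the tilt/azimuth statistics the thesis derives from its ~35,000-plant analysis:

```python
def probabilistic_power(irradiance_kw_m2, p_nominal_kw, param_dist):
    """Most-probable power as the frequency-weighted average over parameter sets.

    param_dist: list of (frequency, performance_factor) pairs, where the
    performance factor stands in for the combined effect of a frequently
    occurring parameter set (module tilt, azimuth, etc.) on plant output.
    """
    total_freq = sum(freq for freq, _ in param_dist)
    weighted_perf = sum(freq * perf for freq, perf in param_dist) / total_freq
    return p_nominal_kw * irradiance_kw_m2 * weighted_perf
```

Per the two-digit-zip-code scheme described above, one would keep a separate `param_dist` per region and nominal-capacity class, then sum the per-plant estimates to obtain the regional aggregate.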
Abstract:
This work sets out to describe one of the techniques most widely used in computer vision for detecting specific shapes in an image, comparing it against several similar methods: the Hough transform.
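For concreteness, here is a minimal randomized-voting sketch of the probabilistic Hough transform for lines (all names invented; the full progressive variant, as implemented by OpenCV's `HoughLinesP`, additionally traces line segments and removes the points that voted for them):

```python
import math
import random

def prob_hough_lines(points, theta_res=1.0, rho_res=1.0, sample_frac=0.5, seed=0):
    """Vote in a (rho, theta) accumulator using a random subset of the points.

    Lines are parametrized as rho = x*cos(theta) + y*sin(theta); voting with
    a random sample of edge points instead of all of them is what makes the
    method 'probabilistic'. Returns (rho, theta_degrees, votes) of the
    strongest accumulator bin.
    """
    rng = random.Random(seed)
    n_theta = int(180 / theta_res)
    center = int(math.ceil(max(math.hypot(x, y) for x, y in points) / rho_res))
    n_rho = 2 * center + 1                       # bins for rho in [-max, +max]
    acc = [[0] * n_theta for _ in range(n_rho)]
    sample = rng.sample(points, max(1, int(len(points) * sample_frac)))
    for x, y in sample:
        for t in range(n_theta):
            theta = math.radians(t * theta_res)
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[center + int(round(rho / rho_res))][t] += 1
    votes, r, t = max((acc[r][t], r, t)
                      for r in range(n_rho) for t in range(n_theta))
    return (r - center) * rho_res, t * theta_res, votes
```

Points on the diagonal y = x all satisfy rho = 0 at theta = 135°, so they pile their votes into a single bin, which is exactly the peak the accumulator search recovers.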