869 results for Probabilistic Algorithms
Abstract:
This study aimed to describe the probabilistic structure of the annual series of extreme daily rainfall (Preabs), available from the weather station of Ubatuba, State of São Paulo, Brazil (1935-2009), using the generalized extreme value (GEV) distribution. The autocorrelation function, the Mann-Kendall test, and wavelet analysis were used to evaluate the presence of serial correlation, trends, and periodic components. Considering the results obtained with these three statistical methods, it was possible to accept the hypothesis that this time series is free of persistence, trends, and periodic components. Based on quantitative and qualitative goodness-of-fit tests, it was found that the GEV distribution may be used to quantify the probabilities of the Preabs data. The best results were obtained when the parameters of this distribution were estimated by the method of maximum likelihood. The method of L-moments also showed satisfactory results.
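A minimal sketch of the kind of analysis described above: fitting a GEV distribution to annual rainfall maxima by maximum likelihood and reading off exceedance probabilities and return levels. The data, threshold, and return period below are hypothetical placeholders, not the Ubatuba series.

```python
# Fit a GEV distribution to annual daily-rainfall maxima by maximum likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
annual_max_mm = rng.gumbel(loc=90.0, scale=25.0, size=75)  # placeholder for 75 years of Preabs

# Note: scipy's genextreme shape parameter c corresponds to -xi in the usual GEV convention.
shape, loc, scale = stats.genextreme.fit(annual_max_mm)

# Probability that the annual maximum exceeds 150 mm, and the 100-year return level.
p_exceed_150 = stats.genextreme.sf(150.0, shape, loc=loc, scale=scale)
return_level_100y = stats.genextreme.ppf(1.0 - 1.0 / 100.0, shape, loc=loc, scale=scale)
print(f"P(Preabs > 150 mm) = {p_exceed_150:.3f}, 100-year level = {return_level_100y:.1f} mm")
```

The same fitted parameters could alternatively be estimated with L-moments, as the abstract notes; only the estimation step would change.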
Abstract:
In this master's thesis, a probabilistic risk analysis of external threats was carried out for the pool-based interim storage facility for spent nuclear fuel located at the Olkiluoto nuclear power plant. Probabilistic risk analysis (PRA) is a widely used approach for identifying and assessing risks at nuclear power plants. The aim of the work was to produce an entirely new external-threat PRA, since no comparable risk assessments in this research area had previously been carried out in Finland. A further motivation for the risk assessment is the role of external threats in the safety of spent nuclear fuel interim storage, which natural disasters around the world have brought into sharper focus. The structure of the PRA was based on a methodology created at the beginning of the study. The analysis is based on identifying potential external threats, excluding intentional man-made damage. Based on the occurrence frequencies and damage potential of the identified external threats, each threat was either screened out using the screening criteria defined in the study or analyzed in more detail. The results showed that data on very rare external threats are incomplete. Most of these very rare external threats have never occurred, and will most likely never occur, in the area affecting Olkiluoto or even in Finland. For example, the roles of lightning strikes and oil exposure, and their effects on the availability of various components, are known only with considerable uncertainty. Overall, the results of the study can be considered significant, because they identify the external threats whose effects should be investigated in more detail. More detailed knowledge of very rare external threats would refine the estimates of initiating event frequencies.
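A minimal sketch of the screening step described in the abstract: each identified external threat is either screened out on the basis of its occurrence frequency and damage potential or kept for detailed analysis. The hazard list, attributes, and cutoff value are hypothetical illustrations, not the criteria used in the Olkiluoto study.

```python
# Screen identified external hazards against frequency and damage-potential criteria.
from dataclasses import dataclass

@dataclass
class Hazard:
    name: str
    frequency_per_year: float      # estimated initiating-event frequency
    can_damage_pool_cooling: bool  # damage potential toward the storage pools

FREQ_CUTOFF = 1e-7  # hypothetical screening frequency

def screen(hazards):
    detailed, screened_out = [], []
    for h in hazards:
        if not h.can_damage_pool_cooling or h.frequency_per_year < FREQ_CUTOFF:
            screened_out.append(h)     # screened out by the criteria
        else:
            detailed.append(h)         # kept for detailed analysis
    return detailed, screened_out

hazards = [
    Hazard("lightning strike", 1e-3, True),
    Hazard("oil spill exposure", 1e-4, True),
    Hazard("meteorite impact", 1e-9, True),
]
detailed, screened_out = screen(hazards)
print("analyze in detail:", [h.name for h in detailed])
```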
Abstract:
Models of intermolecular interaction are widely used in biology. Analysis of contacts between proteins and drug research are typical application areas for such models. A model describing such molecular interactions can be formulated using biophysical theory, which tends to result in an extremely heavy computational burden even for simple applications. An alternative way of formulating models is to exploit large databases containing structural measurements made using, for example, X-ray diffraction. When empirical measurement data are used directly, a statistical model makes it possible to account adequately for the uncertainty and inexactness in the data, while the computational burden remains at a more reasonable level compared with quantum mechanical methods, which in principle should give the optimal results. In this thesis, a 3D model based on Bayesian statistics was developed for the numerical study of intermolecular interactions. The purpose of the model is to produce predictions about which molecular structures, or what kinds of structures, are preferred in a given context, i.e., are more probable within the framework of an interaction. The model was tested in molecular environments essential to its intended use: a small molecule at its binding site in a protein, and the interface between the proteins in a protein complex. The numerical results obtained correspond well with experimental results previously reported in the literature, for example qualitative binding affinities and chemical knowledge of the spatial bond-forming abilities of certain amino acids. The thesis also includes preliminary tests of the statistical approach for modeling the important structural adaptability of molecules.
In practice, the developed model is intended as one component of a more comprehensive analysis method, such as a so-called pharmacophore model.
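A minimal sketch of the general statistical idea described above: empirical counts of contact geometries from a structural database are combined with a prior, and candidate structures are ranked by how probable their contacts are. The categories, counts, prior, and candidate poses are hypothetical placeholders, and this toy Dirichlet-multinomial ranking is not the 3D Bayesian model developed in the thesis.

```python
# Rank candidate interaction geometries by posterior probability of their contacts.
import numpy as np

# Hypothetical observed counts of three contact geometries for a given atom pair.
counts = np.array([120, 45, 10])
alpha = np.ones_like(counts)                    # symmetric Dirichlet prior

posterior_mean = (counts + alpha) / (counts + alpha).sum()

# Candidate structures expressed as the list of contact-geometry categories they realize.
candidates = {"pose_A": [0, 0, 1], "pose_B": [1, 2, 2]}
scores = {name: float(np.sum(np.log(posterior_mean[idx])))
          for name, idx in candidates.items()}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```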
Abstract:
The recent emergence of low-cost RGB-D sensors has brought new opportunities for robotics by providing affordable devices that capture synchronized color and depth images. In this thesis, recent work on pose estimation utilizing RGB-D sensors is reviewed. In addition, a pose recognition system for rigid objects using RGB-D data is implemented. The implementation uses half-edge primitives extracted from the RGB-D images for pose estimation. The system is based on the probabilistic object representation framework by Detry et al., which utilizes Nonparametric Belief Propagation for pose inference. Experiments are performed on household objects to evaluate the performance and robustness of the system.
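A minimal sketch of probabilistic pose scoring from point data, assuming a simple nearest-neighbour likelihood over candidate rigid poses. It only illustrates the general idea of weighting pose hypotheses against observed 3D data; the actual system uses half-edge primitives, the representation of Detry et al., and Nonparametric Belief Propagation. All data and parameters below are synthetic.

```python
# Score candidate rigid poses of a point model against an observed point cloud.
import numpy as np
from scipy.spatial import cKDTree

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(1)
model_pts = rng.uniform(-0.05, 0.05, size=(200, 3))                 # hypothetical object model
observed = model_pts @ rot_z(0.3).T + np.array([0.10, 0.0, 0.0])    # true pose: 0.3 rad, 10 cm shift
observed = observed + rng.normal(scale=0.002, size=observed.shape)  # simulated sensor noise
tree = cKDTree(observed)

def pose_log_likelihood(theta, t, sigma=0.005):
    """Score a candidate pose by how close transformed model points lie to the cloud."""
    transformed = model_pts @ rot_z(theta).T + t
    dists, _ = tree.query(transformed)
    return float(-0.5 * np.sum((dists / sigma) ** 2))

candidates = [(0.0, np.zeros(3)), (0.3, np.array([0.10, 0.0, 0.0]))]
best = max(candidates, key=lambda p: pose_log_likelihood(*p))
print("best candidate: rotation", best[0], "translation", best[1])
```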
Abstract:
Global illumination algorithms are at the center of realistic image synthesis and account for non-trivial light transport and occlusion within scenes, such as indirect illumination, ambient occlusion, and environment lighting. Their computationally most difficult part is determining light source visibility at each visible scene point. Height fields, on the other hand, constitute an important special case of geometry and are mainly used to describe certain types of objects such as terrains and to map detailed geometry onto object surfaces. The geometry of an entire scene can also be approximated by treating the distance values of its camera projection as a screen-space height field. In order to shadow height fields from environment lights a horizon map is usually used to occlude incident light. We reduce the per-receiver time complexity of generating the horizon map on N × N height fields from O(N) of the previous work to O(1) by using an algorithm that incrementally traverses the height field and reuses the information already gathered along the path of traversal. We also propose an accurate method to integrate the incident light within the limits given by the horizon map. Indirect illumination in height fields requires information about which other points are visible to each height field point. We present an algorithm to determine this intervisibility in a time complexity that matches the space complexity of the produced visibility information, which is in contrast to previous methods which scale in the height field size. As a result the amount of computation is reduced by two orders of magnitude in common use cases. Screen-space ambient obscurance methods approximate ambient obscurance from the depth buffer geometry and have been widely adopted by contemporary real-time applications. They work by sampling the screen-space geometry around each receiver point but have been previously limited to near-field effects because sampling a large radius quickly exceeds the render time budget. We present an algorithm that reduces the quadratic per-pixel complexity of previous methods to a linear complexity by line sweeping over the depth buffer and maintaining an internal representation of the processed geometry from which occluders can be efficiently queried. Another algorithm is presented to determine ambient obscurance from the entire depth buffer at each screen pixel. The algorithm scans the depth buffer in a quick pre-pass and locates important features in it, which are then used to evaluate the ambient obscurance integral accurately. We also propose an evaluation of the integral such that results within a few percent of the ray traced screen-space reference are obtained at real-time render times.
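A minimal sketch of the baseline horizon-map computation that the first contribution above improves on: for one sweep direction over a height profile, each receiver's horizon angle is the maximum elevation angle toward any sample behind it. This brute-force form is O(N) per receiver; the thesis reduces it to O(1) by reusing information gathered along the sweep. The height samples are hypothetical.

```python
# Brute-force horizon angles along one sweep direction of a 1D height profile.
import numpy as np

heights = np.array([0.0, 2.0, 1.0, 4.0, 1.5, 0.5, 3.0])  # height field samples
dx = 1.0                                                  # sample spacing

def horizon_angles(h, dx):
    angles = np.zeros_like(h)
    for i in range(len(h)):
        best = -np.pi / 2                      # nothing occludes when no samples lie behind
        for j in range(i):                     # every potential occluder behind the receiver
            best = max(best, np.arctan2(h[j] - h[i], (i - j) * dx))
        angles[i] = best
    return angles

print(np.degrees(horizon_angles(heights, dx)))
```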
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model where ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include identification of faults from seismic data and identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include development of an efficient semidefinite optimization method for embedding graphs into the Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry and engineering. Asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
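A minimal sketch of projecting a point onto a one-dimensional ridge of a Gaussian kernel density estimate. It uses the standard subspace-constrained mean-shift iteration as a stand-in, not the trust region Newton method developed in the thesis; the data set and bandwidth are hypothetical.

```python
# Project a point onto a 1D ridge of a Gaussian KDE via subspace-constrained mean shift.
import numpy as np

rng = np.random.default_rng(2)
t = rng.uniform(-2, 2, size=300)
data = np.column_stack([t, 0.5 * t**2]) + rng.normal(scale=0.1, size=(300, 2))
bandwidth = 0.3

def project_to_ridge(x, data, h, steps=100):
    for _ in range(steps):
        diff = data - x                                  # (n, d) offsets to the data
        w = np.exp(-0.5 * np.sum(diff**2, axis=1) / h**2)
        hess = (diff.T * w) @ diff / h**2 - np.eye(len(x)) * w.sum()   # Hessian up to a constant
        evals, evecs = np.linalg.eigh(hess)
        V = evecs[:, :len(x) - 1]                        # directions across the ridge (smallest eigenvalues)
        mean_shift = data.T @ w / w.sum() - x
        x = x + V @ (V.T @ mean_shift)                   # move only across the ridge
    return x

print(project_to_ridge(np.array([0.5, 0.6]), data, bandwidth))
```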
Abstract:
We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with the conventional one, the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of ELISA anti-HCV samples, using an s/co threshold showing ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, of which 283 were concordant with one another. Indeterminate results from algorithms A and C were elucidated by PCR (expanded algorithm), which detected two more positive samples. The estimated cost of algorithms A and B was US$21,299.39 and US$32,397.40, respectively, which were 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.
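A minimal sketch of the three testing algorithms written as decision rules, to make their ordering of tests explicit. The s/co cutoff below is a hypothetical placeholder (the study derives it from the ≥95% concordance criterion), and the test results are passed in as callables so that supplemental tests are only "ordered" when needed.

```python
# Decision rules for the three HCV testing algorithms compared in the study.
SCO_CUTOFF = 10.0  # hypothetical s/co threshold; the study derives it from >=95% concordance with IB

def algorithm_a(sco_ratio, immunoblot):
    """ELISA s/co ratio first; immunoblot only for samples below the cutoff."""
    if sco_ratio >= SCO_CUTOFF:
        return "positive"
    return immunoblot()            # supplemental test needed for ~54% of samples

def algorithm_b(pcr, immunoblot):
    """Reflex PCR for ELISA-positive/inconclusive samples; IB when PCR is negative."""
    if pcr() == "positive":
        return "positive"          # also gives early information about viremia
    return immunoblot()

def algorithm_c(immunoblot):
    """Conventional approach: immunoblot for every ELISA-positive/inconclusive sample."""
    return immunoblot()

# Example: a sample with a high s/co ratio needs no supplemental test under algorithm A.
print(algorithm_a(15.0, immunoblot=lambda: "negative"))   # -> positive
```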
Abstract:
Our objective is to evaluate the accuracy of three algorithms in differentiating the origins of outflow tract ventricular arrhythmias (OTVAs). This study involved 110 consecutive patients with OTVAs for whom a standard 12-lead surface electrocardiogram (ECG) showed typical left bundle branch block morphology with an inferior axis. All the ECG tracings were retrospectively analyzed using the following three recently published ECG algorithms: 1) the transitional zone (TZ) index, 2) the V2 transition ratio, and 3) V2 R wave duration and R/S wave amplitude indices. Considering all patients, the V2 transition ratio had the highest sensitivity (92.3%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (93.9%). The latter finding had a maximal area under the ROC curve of 0.925. In patients with left ventricular (LV) rotation, the V2 transition ratio had the highest sensitivity (94.1%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (87.5%). The former finding had a maximal area under the ROC curve of 0.892. All three published ECG algorithms are effective in differentiating the origin of OTVAs, while the V2 transition ratio, and the V2 R wave duration and R/S wave amplitude indices are the most sensitive and specific algorithms, respectively. Amongst all of the patients, the V2 R wave duration and R/S wave amplitude algorithm had the maximal area under the ROC curve, but in patients with LV rotation the V2 transition ratio algorithm had the maximum area under the ROC curve.
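A minimal sketch of the V2 transition ratio, assuming its usual definition in the literature: the percentage R wave (R/(R+S)) in lead V2 during the arrhythmia divided by the same quantity during sinus rhythm, with values of roughly 0.6 or more favouring a left-sided outflow tract origin. The amplitudes and the cutoff below are illustrative assumptions, not measurements or thresholds reported by this study.

```python
# Illustrative computation of the V2 transition ratio from R and S wave amplitudes.
def percentage_r(r_amplitude_mv, s_amplitude_mv):
    """Fraction of the QRS deflection in V2 that is positive: R / (R + S)."""
    return r_amplitude_mv / (r_amplitude_mv + s_amplitude_mv)

def v2_transition_ratio(r_vt, s_vt, r_sinus, s_sinus):
    """Percentage R wave during the arrhythmia divided by that during sinus rhythm."""
    return percentage_r(r_vt, s_vt) / percentage_r(r_sinus, s_sinus)

ratio = v2_transition_ratio(r_vt=0.4, s_vt=0.8, r_sinus=0.3, s_sinus=1.2)
origin = "LVOT" if ratio >= 0.6 else "RVOT"   # 0.6 cutoff is an assumption from the literature
print(f"V2 transition ratio = {ratio:.2f} -> suggested origin: {origin}")
```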
Abstract:
Many industrial applications need object recognition and tracking capabilities. The algorithms developed for those purposes are computationally expensive. Yet, real-time performance, high accuracy, and small power consumption are essential measures of such a system. When all these requirements are combined, hardware acceleration of these algorithms becomes a feasible solution. The purpose of this study is to analyze the current state of these hardware acceleration solutions: which algorithms have been implemented in hardware and what modifications have been made to adapt these algorithms to hardware.
Abstract:
Simplification of highly detailed CAD models is an important step when CAD models are visualized or otherwise utilized in augmented reality applications. Without simplification, CAD models may cause severe processing and storage issues, especially on mobile devices. In addition, simplified models may have other advantages, such as better visual clarity or improved reliability when used for visual pose tracking. The geometry of CAD models is invariably presented in the form of a 3D mesh. In this paper, we survey mesh simplification algorithms in general and focus especially on algorithms that can be used to simplify CAD models. We test some commonly known algorithms with real-world CAD data and characterize some new CAD-related simplification algorithms that have not been covered in previous mesh simplification reviews.
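A minimal sketch of one classic simplification scheme covered by such surveys: vertex clustering, in which vertices are snapped to a coarse grid, merged, and collapsed triangles are dropped. This is an illustration only, not necessarily one of the algorithms tested on CAD data in the paper; the tiny mesh is made up.

```python
# Vertex-clustering mesh simplification: snap vertices to a grid and drop degenerate faces.
import numpy as np

def vertex_cluster(vertices, faces, cell_size):
    keys = np.floor(vertices / cell_size).astype(int)   # grid cell of each vertex
    cluster_index = {}
    new_vertices = []
    remap = np.empty(len(vertices), dtype=int)
    for i, key in enumerate(map(tuple, keys)):
        if key not in cluster_index:
            cluster_index[key] = len(new_vertices)
            new_vertices.append(vertices[i])             # representative vertex of this cell
        remap[i] = cluster_index[key]
    new_faces = []
    for f in faces:
        mapped = tuple(remap[v] for v in f)
        if len(set(mapped)) == 3:                        # drop triangles collapsed by the merge
            new_faces.append(mapped)
    return np.array(new_vertices), new_faces

verts = np.array([[0, 0, 0], [0.1, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = [(0, 1, 2), (0, 2, 3), (0, 3, 4)]
v2, f2 = vertex_cluster(verts, faces, cell_size=0.5)
print(f"{len(verts)} -> {len(v2)} vertices, {len(faces)} -> {len(f2)} faces")
```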
Abstract:
The increasing performance of computers has made it possible to solve algorithmically problems for which manual and possibly inaccurate methods have previously been used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure at different areas. In the second problem the input consists of a set of sites where water quality observations have been made and of the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted for serial operation or for a small number of CPU cores, whereas one, together with its further developments, is suitable also for parallel machines such as GPUs.
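A minimal sketch of the core geometric operation behind the fetch length problem: casting a ray from a study point in a given compass bearing and taking the nearest intersection with the shoreline segments. The square "island" below is a hypothetical stand-in; real maps with millions of polygon vertices and study points are what make the efficient algorithms of the thesis necessary.

```python
# Fetch length in one direction: nearest ray/shoreline-segment intersection distance.
import math

def ray_segment_distance(p, d, a, b):
    """Distance along ray p + t*d (t >= 0) to segment a-b, or None if there is no hit."""
    ex, ey = b[0] - a[0], b[1] - a[1]
    qx, qy = a[0] - p[0], a[1] - p[1]
    denom = d[0] * ey - d[1] * ex
    if abs(denom) < 1e-12:
        return None                              # ray parallel to this shoreline segment
    t = (qx * ey - qy * ex) / denom              # distance along the ray
    s = (qx * d[1] - qy * d[0]) / denom          # position along the segment
    return t if t >= 0.0 and 0.0 <= s <= 1.0 else None

def fetch_length(p, bearing_deg, shoreline):
    d = (math.sin(math.radians(bearing_deg)), math.cos(math.radians(bearing_deg)))
    hits = []
    for a, b in zip(shoreline, shoreline[1:] + shoreline[:1]):
        t = ray_segment_distance(p, d, a, b)
        if t is not None:
            hits.append(t)
    return min(hits) if hits else float("inf")   # open water in this direction

island = [(-100.0, -100.0), (100.0, -100.0), (100.0, 100.0), (-100.0, 100.0)]
print(fetch_length((0.0, 0.0), bearing_deg=90.0, shoreline=island))  # -> 100.0
```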
Abstract:
This research attempted to address the question of the role of explicit algorithms and episodic contexts in the acquisition of computational procedures for regrouping in subtraction. Three groups of students having difficulty learning to subtract with regrouping were taught procedures for doing so through either an explicit algorithm, an episodic context, or an examples approach. It was hypothesized that the use of an explicit algorithm represented in a flow chart format would facilitate the acquisition and retention of specific procedural steps relative to the other two conditions. On the other hand, the use of paragraph stories to create episodic context was expected to facilitate the retrieval of algorithms, particularly in a mixed presentation format. The subjects were tested on similar, near, and far transfer questions over a four-day period. Near and far transfer algorithms were also introduced on Day Two. The results suggested that both explicit algorithms and episodic context facilitate performance on questions requiring subtraction with regrouping. However, the differential effects of these two approaches on near and far transfer questions were not as easy to identify. Explicit algorithms may facilitate the acquisition of specific procedural steps while at the same time inhibiting the application of such steps to transfer questions. Similarly, the value of episodic context in cuing the retrieval of an algorithm may be limited by the ability of a subject to identify and classify a new question as an exemplar of a particular episodically defined problem type or category. The implications of these findings in relation to the procedures employed in the teaching of Mathematics to students with learning problems are discussed in detail.
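A minimal sketch of the kind of explicit, step-by-step regrouping algorithm referred to above: subtract digit by digit from the right, borrowing (regrouping) from the next column whenever the top digit is too small. The example numbers are illustrative, and the function assumes the minuend is at least as large as the subtrahend.

```python
# Explicit digit-by-digit subtraction with regrouping (borrowing).
def subtract_with_regrouping(top, bottom):
    top_digits = [int(c) for c in str(top)][::-1]        # least significant digit first
    bottom_digits = [int(c) for c in str(bottom)][::-1]
    bottom_digits += [0] * (len(top_digits) - len(bottom_digits))
    result = []
    borrow = 0
    for t, b in zip(top_digits, bottom_digits):
        t -= borrow
        if t < b:                 # regroup: borrow ten from the next column
            t += 10
            borrow = 1
        else:
            borrow = 0
        result.append(t - b)
    return int("".join(str(d) for d in reversed(result)))

print(subtract_with_regrouping(503, 278))   # -> 225
```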