977 results for Computation


Abstract:

This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computing the posterior probability density function. Except for a very restricted class of models, this density function cannot be computed in closed form, so approximation methods are needed.

A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, the extended Kalman filter, Gauss-Hermite filters, and particle filters to estimate the states from the available measurements. Among these, particle filters are Monte Carlo methods for approximating the filtering distributions of non-linear, non-Gaussian state space models. The performance of a particle filter depends heavily on the chosen importance distribution; an inappropriate choice can cause the algorithm to fail to converge. In this thesis, we analyze the theoretical Lᵖ convergence of particle filters with general importance distributions, where p ≥ 2 is an integer.

A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional, complex models, parameters can be estimated by Markov chain Monte Carlo (MCMC) methods, which require the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods in which the states are integrated out, and we apply this computation to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends strongly on the chosen proposal distribution. A commonly used proposal is Gaussian, in which case the covariance matrix must be well tuned; adaptive MCMC methods can be used for this. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
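To make the particle filtering discussion concrete, below is a minimal Python sketch of a bootstrap particle filter, the special case in which the importance distribution is the state-transition prior so the weights reduce to the measurement likelihood. The model functions and noise scales are illustrative, not taken from the thesis.

```python
import numpy as np

def bootstrap_particle_filter(y, f, h, q_std, r_std, n_particles=500, rng=None):
    """Bootstrap particle filter: importance distribution = transition prior,
    so the (unnormalized) weights are just the measurement likelihoods."""
    rng = np.random.default_rng() if rng is None else rng
    particles = rng.normal(0.0, 1.0, n_particles)  # illustrative initial prior
    means = []
    for yt in y:
        # Propagate through the (possibly nonlinear) dynamics plus process noise.
        particles = f(particles) + rng.normal(0.0, q_std, n_particles)
        # Weight by the Gaussian measurement likelihood p(y_t | x_t).
        w = np.exp(-0.5 * ((yt - h(particles)) / r_std) ** 2)
        w /= w.sum()
        means.append(np.sum(w * particles))
        # Multinomial resampling guards against weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)

# Illustrative nonlinear, non-Gaussian benchmark model:
f = lambda x: 0.5 * x + 25.0 * x / (1.0 + x ** 2)
h = lambda x: x ** 2 / 20.0
```

A more carefully chosen importance distribution (e.g., one built from an extended Kalman or Gauss-Hermite approximation) replaces only the propagation and weighting lines; the convergence analysis in the thesis concerns precisely how general such choices may be.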

Abstract:

The aim of this work is to apply approximate Bayesian computation (ABC) in combination with Markov chain Monte Carlo (MCMC) methods in order to estimate the parameters of tuberculosis transmission. The methods are applied to the San Francisco data, and the results are compared with the outcomes of previous works. Moreover, a methodological idea aimed at reducing the computational time is described. Although this approach is shown to work appropriately, further analysis is needed to understand and test its behaviour in different cases; suggestions for its further enhancement are given in the corresponding chapter.
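As a hedged illustration of the combination described, the sketch below implements ABC within a Metropolis random-walk chain: a proposed parameter is accepted only if data simulated under it reproduce the observed summary statistics to within a tolerance. The simulator, summary statistic, tolerance and starting value are all placeholders, not those of the study.

```python
import numpy as np

def abc_mcmc(simulate, summary, y_obs, eps, n_iter=10_000, step=0.1, rng=None):
    """ABC-MCMC: the intractable likelihood in the Metropolis ratio is replaced
    by an indicator that simulated summaries fall within eps of the observed
    ones. Assumes a flat prior and symmetric proposal, so the ratio is else 1."""
    rng = np.random.default_rng() if rng is None else rng
    s_obs = np.asarray(summary(y_obs), dtype=float)
    theta, chain = 1.0, []                           # illustrative start value
    for _ in range(n_iter):
        theta_prop = theta + rng.normal(0.0, step)   # symmetric random walk
        s_sim = np.asarray(summary(simulate(theta_prop, rng)), dtype=float)
        if np.linalg.norm(s_sim - s_obs) < eps:      # ABC accept/reject step
            theta = theta_prop
        chain.append(theta)
    return np.array(chain)
```

Shrinking eps tightens the approximation to the true posterior at the cost of more rejected simulations, which is exactly the computational-time trade-off that motivates the methodological idea mentioned above.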

Abstract:

Nineteen-channel EEGs were recorded from the scalp surface of 30 healthy subjects (16 males and 14 females; mean age 34 years, SD 11.7 years) at rest and under trains of intermittent photic stimulation (IPS) at rates of 5, 10 and 20 Hz. Digitized data were submitted to spectral analysis with the fast Fourier transform, providing the basis for the computation of global field power (GFP). For quantification, GFP values in the frequency ranges of 5, 10 and 20 Hz at rest were divided by the corresponding values obtained under IPS. All subjects showed a photic driving effect at each rate of stimulation. GFP data were normally distributed, whereas the ratios derived from the photic driving data showed no uniform behavior due to high interindividual variability. Suppression of alpha power after IPS at 10 Hz was observed in about 70% of the volunteers. In contrast, the ratios of alpha power were unequivocal in all subjects: IPS at 20 Hz always led to a suppression of alpha power. Dividing the alpha GFP under 20-Hz IPS by the alpha GFP at rest (R = alpha-GFP_IPS / alpha-GFP_rest) thus resulted in ratios lower than 1. We conclude that ratios of GFP data with 20-Hz IPS may provide a suitable paradigm for further investigations.
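A sketch of the quantification described, taking GFP at a given frequency as the spread of spectral amplitude across the 19 channels (one common operationalization; the function names and array shapes below are our assumptions):

```python
import numpy as np

def gfp_at(eeg, fs, hz):
    """GFP at one frequency: across-channel standard deviation of the FFT
    amplitude spectrum. eeg has shape (n_channels, n_samples); fs in Hz."""
    spectra = np.abs(np.fft.rfft(eeg, axis=1))
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    return spectra[:, np.argmin(np.abs(freqs - hz))].std()

def driving_ratio(eeg_ips, eeg_rest, fs, hz=10.0):
    """R = alpha-GFP_IPS / alpha-GFP_rest; R < 1 indicates suppression,
    as reported above for 20-Hz stimulation."""
    return gfp_at(eeg_ips, fs, hz) / gfp_at(eeg_rest, fs, hz)
```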

Abstract:

This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize on-chip resources. As the dark silicon era approaches, in which power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most work on resource management treats only the physical components (i.e., computation, communication, and memory blocks) as resources and manipulates the component-to-application mapping to optimize various parameters (e.g., energy efficiency). To further enhance the optimization potential, we propose to manipulate abstract resources (i.e., the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture) in addition to the physical resources. The proposed framework (i.e., VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work, several novel architectural enhancements, algorithms, and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse-Grained Reconfigurable Architectures (CGRAs) and Networks-on-Chip (NoCs) to test the feasibility of our approach; specifically, the Dynamically Reconfigurable Resource Array (DRRA) and McNoC serve as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
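The flavour of one abstract resource, the voltage/frequency operating point, can be conveyed by a toy sketch: each application is given the cheapest point that still meets its deadline. The operating-point table and task parameters below are hypothetical, not DRRA/McNoC data.

```python
# Hypothetical V/F operating points: (frequency in MHz, relative power cost).
OPERATING_POINTS = [(200, 0.2), (400, 0.5), (600, 0.9), (800, 1.4)]

def pick_operating_point(cycles, deadline_ms):
    """Lowest-power V/F point whose execution time still meets the deadline."""
    for freq_mhz, power in sorted(OPERATING_POINTS, key=lambda p: p[1]):
        exec_ms = cycles / (freq_mhz * 1e3)   # 1 MHz = 1e3 cycles per ms
        if exec_ms <= deadline_ms:
            return freq_mhz, power
    return OPERATING_POINTS[-1]               # fall back to the fastest point

# Each application gets its own "private operating environment":
apps = {"audio": (4e6, 25.0), "video": (4.8e7, 70.0)}  # (cycles, deadline ms)
for name, (cycles, deadline) in apps.items():
    print(name, pick_operating_point(cycles, deadline))
```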

Abstract:

Quantum computation and quantum communication are two of the most promising future applications of quantum mechanics. Since the information carriers used in both are essentially open quantum systems, it is necessary to understand both quantum information theory and the theory of open quantum systems in order to investigate realistic implementations of such quantum technologies. In this thesis we consider the theory of open quantum systems from a quantum information theory perspective. The thesis is divided into two parts: a review of the literature and original research. In the literature review we present some important definitions and known results from open quantum systems and quantum information theory. We present the definitions of the trace distance, two channel capacities and the superdense coding capacity, and explain why they can be used to represent the transmission efficiency of a communication channel. We also derive some properties that link completely positive, trace-preserving maps to the trace distance and the channel capacities. With the help of these properties we construct three measures of non-Markovianity and explain why they detect non-Markovianity. In the original research part of the thesis we study non-Markovian dynamics in an experimentally realized quantum optical set-up. For general one-qubit dephasing channels we calculate the explicit forms of the two channel capacities and the superdense coding capacity. For the general two-qubit dephasing channel with uncorrelated local noises we calculate the explicit forms of the quantum capacity and the mutual information of a four-letter encoding. Using the dynamics of the experimental implementation as a set of specific dephasing channels, we also calculate and compare the measures in one- and two-qubit dephasing channels, and we study options for manipulating the environment to achieve revivals and higher transmission rates in the superdense coding protocol with dephasing noise.
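A minimal numerical illustration of the dephasing setting (our own toy example, not the thesis's experimental channels): for one-qubit pure dephasing only the coherences are damped, and the trace distance between the |+> and |-> states equals the decoherence factor |k(t)|, so revivals of this quantity are exactly what trace-distance-based non-Markovianity measures detect.

```python
import numpy as np

def dephase(rho, k):
    """One-qubit dephasing channel: coherences shrink by the factor k
    (0 <= |k| <= 1); populations are untouched."""
    out = rho.astype(complex).copy()
    out[0, 1] *= k
    out[1, 0] *= np.conj(k)
    return out

def trace_distance(rho, sigma):
    """D(rho, sigma) = (1/2) * trace norm of (rho - sigma)."""
    return 0.5 * np.linalg.svd(rho - sigma, compute_uv=False).sum()

plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)     # |+><+|
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)  # |-><-|
for k in (1.0, 0.5, 0.1):
    print(k, trace_distance(dephase(plus, k), dephase(minus, k)))  # prints k
```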

Abstract:

The purpose of this thesis is to study the scalability of small break LOCA experiments. The study is performed on the experimental data as well as on the results of thermal hydraulic computations performed with the TRACE code. The SBLOCA experiments were performed on the PACTEL facility situated at LUT. The temporal scaling of the results was done by relating the total coolant mass in the system to the initial break mass flow and using the quotient to scale the experiment time. The results showed many similarities in the behaviour of pressure and break mass flow between the experiments.
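The temporal scaling described reduces to a single quotient; a one-line sketch (variable names are ours):

```python
def scaled_time(t_s, coolant_mass_kg, initial_break_flow_kg_s):
    """Dimensionless time: experiment time divided by the characteristic
    drain time, i.e. total coolant mass / initial break mass flow."""
    return t_s * initial_break_flow_kg_s / coolant_mass_kg
```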

Abstract:

There is much evidence to support an age-related decline in source memory ability; however, the underlying mechanisms responsible for this decline are not well understood. The current study was carried out to determine the electrophysiological correlates of source memory discrimination in younger and older adults. Event-related potentials (ERPs) and continuous electrocardiographic (ECG) data were collected from younger (M = 21 years) and older (M = 71 years) adults during a source memory task. Older adults were more likely than younger adults to make source memory errors for recently repeated, non-target words. Moreover, their ERP records for correct trials showed an increased amplitude in the late positive (LP) component (400-800 ms) for the most recently presented non-target stimuli relative to the LP noted for target items. Younger adults showed the opposite pattern, with a large LP component for target items and a much smaller LP component for the recently repeated non-target items. Parasympathetic activity in the vagus nerve was computed from the ECG data (Porges, 1985), and the resulting measure, vagal tone, was used as an index of physiological responsivity. After accounting for age effects, vagal tone was negatively related to the LP amplitude for the most recently repeated non-target words in both groups. The ERP data support the hypothesis that the tendency of older adults to make source memory errors is related to the ability to selectively control attentional processes during task performance. Furthermore, the relationship between vagal tone and ERP reactivity suggests a physiological basis to the heightened reactivity measured in the LP response to recently repeated non-target items: under decreased physiological resources, older adults are impaired in the ability to selectively inhibit bottom-up, stimulus-based properties in favour of task-related goals. The inconsistency of these results with other explanatory models of source memory deficits is discussed. It is concluded that the data are consistent with a physiological reactivity model requiring inhibition of reactivity to irrelevant, but perceptually fluent, stimuli.

Abstract:

Relationships between surface sediment diatom assemblages and lake trophic status were studied in 50 Canadian Precambrian Shield lakes in the Muskoka-Haliburton and southern Ontario regions. The purpose of this study was to develop mathematical regression models to infer lake trophic status from diatom assemblage data. To achieve this goal, however, additional investigations dealing with the evaluation of lake trophic status and the autecological features of key diatom species were carried out.

Because a unifying index and classification for lake trophic status was not available, a new multiple index was developed in this study from the physical, chemical and biological data of 85 southern Ontario lakes. Using this new trophic parameter, the lake trophic level (TL) was determined as

TL = 1.37 ln[1 + (TP × Chl-a / SD)],

where TP is total phosphorus, Chl-a is chlorophyll-a and SD is Secchi depth. Boundaries between seven lake trophic categories were established (ultra-oligotrophic lakes: 0-0.24; oligotrophic lakes: 0.241-1.8; oligo-mesotrophic lakes: 1.81-3.0; mesotrophic lakes: 3.01-4.20; meso-eutrophic lakes: 4.21-5.4; eutrophic lakes: 5.41-10; hyper-eutrophic lakes: above 10). The new trophic parameter is more convenient for water quality management, communication with the public and comparison with other trophic status indices than many previously published indices, because the TL index attempts to capture the comprehensive trophic state of a lake in a single, clearly interpretable value.

Diatom species autecology analysis was central to this thesis; however, the autecological relationship between diatom species and lake trophic status had not previously been well documented. Based on the diatom composition and the variation in species abundance in 30 study lakes, the distribution optima of the diatom species were determined using the quantitative "weighted average" method (Charles 1985). On this basis, the diatom species were classified into five trophic categories (oligotrophic, oligo-mesotrophic, mesotrophic, meso-eutrophic and eutrophic species groups). The resulting autecological classifications were used in the regression analysis between diatom assemblages and lake trophic status.

When the TL values of the 30 lakes were regressed against their five corresponding diatom trophic groups, two equations expressing the assumed linear relationship with diatom assemblage composition were obtained:

(1) using a single regression technique:
TL = 2.643 - 7.575 log(Index D)   (r = 0.88, r² = 0.77, P = 0.0001, n = 30),
where Index D = (O% + OM% + M%) / (E% + ME% + M%);

(2) using a multiple regression technique:
TL = 4.285 - 0.076 O% - 0.055 OM% - 0.026 M% + 0.033 ME% + 0.065 E%   (r = 0.89, r² = 0.792, P = 0.0001, n = 30).

When both models were applied to another 20 test lakes, there was a significant correlation between the measured and the diatom-inferred trophic levels for both the single and the multiple regression methods (P < 0.0001, n = 20), and the correlation coefficients were also statistically significant (r² > 0.68, n = 20). The two transfer function models between diatoms and lake trophic status were thus validated: they were developed using one group of lakes and then tested on an entirely different group.

This study indicated that diatom assemblages are sensitive to lake trophic status. As indicators of lake trophic status, diatoms are especially useful where no local trophic information is available and in studies of the paleotrophic history of lakes. The diatom autecological information was used to develop an approach for assessing water quality and lake trophic status.
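The TL index itself is straightforward to compute; a sketch (assuming TP and Chl-a in µg/L and SD in metres, units the abstract does not state):

```python
import math

def trophic_level(tp, chl_a, sd):
    """TL = 1.37 * ln(1 + TP * Chl-a / SD)."""
    return 1.37 * math.log(1.0 + tp * chl_a / sd)

# Upper bounds of the categories established in the study:
CLASSES = [(0.24, "ultra-oligotrophic"), (1.8, "oligotrophic"),
           (3.0, "oligo-mesotrophic"), (4.2, "mesotrophic"),
           (5.4, "meso-eutrophic"), (10.0, "eutrophic")]

def classify(tl):
    for upper, name in CLASSES:
        if tl <= upper:
            return name
    return "hyper-eutrophic"

print(classify(trophic_level(tp=20.0, chl_a=5.0, sd=3.0)))  # illustrative values
```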

Abstract:

The (n, k)-star interconnection network was proposed in 1995 as an attractive alternative to the n-star topology in parallel computation. The (n, k)-star has significant advantages over the n-star, which itself was proposed as an attractive alternative to the popular hypercube. The major advantage of the (n, k)-star network is its scalability, which makes it more flexible than the n-star as an interconnection network. In this thesis, we focus on finding graph-theoretical properties of the (n, k)-star as well as developing parallel algorithms that run on this network. The basic topological properties of the (n, k)-star are studied first; these are useful because they can be used to develop efficient algorithms on this network. We then study the (n, k)-star network from an algorithmic point of view, investigating both fundamental and application algorithms for basic communication, prefix computation, sorting, and so on. A literature review of the state of the art in relation to the (n, k)-star network, as well as some open problems in this area, is also provided.
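For readers unfamiliar with the topology, the sketch below builds the (n, k)-star from its usual definition: vertices are the k-permutations of {1, ..., n}; each vertex is adjacent to the k - 1 vertices obtained by swapping its first symbol with the symbol in position i, and to the n - k vertices obtained by replacing its first symbol with an unused one, giving degree n - 1 throughout. The construction is our illustration of the standard definition, not code from the thesis.

```python
from itertools import permutations

def nk_star_edges(n, k):
    """Edge set of the (n, k)-star graph."""
    edges = set()
    for v in permutations(range(1, n + 1), k):
        for i in range(1, k):                         # swap edges
            u = (v[i],) + v[1:i] + (v[0],) + v[i + 1:]
            edges.add(frozenset((v, u)))
        for s in set(range(1, n + 1)) - set(v):       # replacement edges
            edges.add(frozenset(((s,) + v[1:], v)))
    return edges

# (4, 2)-star: 4 * 3 = 12 vertices, each of degree n - 1 = 3, hence 18 edges.
print(len(nk_star_edges(4, 2)))   # 18
```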

Abstract:

The (n, k)-arrangement interconnection topology was first introduced in 1992. The (n, k)-arrangement graph is a class of generalized star graphs. Compared with the well-known n-star, the (n, k)-arrangement graph is more flexible in degree and diameter. However, few algorithms have been designed for the (n, k)-arrangement graph to date. In this thesis, we focus on finding graph-theoretical properties of the (n, k)-arrangement graph and developing parallel algorithms that run on this network. The topological properties of the arrangement graph, including its cyclic properties, are studied first. We then study communication problems, namely broadcasting and routing, and subsequently embedding problems; these are very useful for developing efficient algorithms on this network. We then study the (n, k)-arrangement network from the algorithmic point of view, investigating both fundamental and application algorithms such as prefix sums computation, sorting, merging and a basic geometric computation: finding the convex hull on the (n, k)-arrangement graph. A literature review of the state of the art in relation to the (n, k)-arrangement network, as well as some open problems in this area, is also provided.
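A sketch of the defining adjacency (our illustration): vertices of the (n, k)-arrangement graph are again the k-permutations of {1, ..., n}, and two vertices are adjacent exactly when they differ in a single position, so every vertex has degree k(n - k).

```python
from itertools import permutations

def adjacent(u, v):
    """(n, k)-arrangement adjacency: differ in exactly one position."""
    return sum(a != b for a, b in zip(u, v)) == 1

def degree(v, n):
    """Count neighbours by brute force; equals k * (n - k)."""
    return sum(adjacent(v, u) for u in permutations(range(1, n + 1), len(v)))

print(degree((1, 2), n=4))   # 2 * (4 - 2) = 4
```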

Abstract:

To investigate the thermal effects of latent heat in hydrothermal settings, an extension was made to the existing finite-element numerical modelling software, Aquarius. The latent heat algorithm was validated using a series of column models, which analysed the effects of permeability (flow rate), thermal gradient, and position along the two-phase curve (pressure). Increasing the flow rate and pressure increases the displacement of the liquid-steam boundary from an initial position determined without accounting for latent heat, while increasing the thermal gradient decreases that displacement. Application to a regional-scale model of a caldera-hosted hydrothermal system, based on a representative suite of calderas (e.g., Yellowstone, Creede, Valles Grande), led to oscillations in the model solution. The oscillations can be reduced or eliminated by mesh refinement, which requires greater computational effort. The results indicate that latent heat should be accounted for to accurately model phase-change conditions in hydrothermal settings.

Abstract:

Layout planning is the process of sizing and placing rooms (e.g., in a house) while attempting to optimize various criteria. Often there are conflicting criteria such as construction cost, minimizing the distance between related activities, and meeting the area requirements for those activities. The process of layout planning has mostly been done by hand, with a handful of attempts to automate it. This thesis explores some of those past attempts and describes several new techniques for automating the layout planning process using evolutionary computation. These techniques are inspired by the existing methods while adding innovations of their own. Additional experiments test the possibility of allowing polygonal exteriors with rectilinear interior walls. Several multi-objective approaches are used to evaluate and compare fitness. The evolutionary representation and requirements specification used provide great flexibility in problem scope and depth and are worth considering in future layout and design work. The system outlined in this thesis is capable of evolving a variety of floor plans conforming to functional and geometric specifications. Many of the resulting plans look reasonable even when compared to a professional floor plan. Polygonal and multi-floor buildings were also generated.
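A toy version of the kind of fitness evaluation involved, using a weighted sum as a stand-in for the multi-objective comparison actually described (room names, targets and operators below are hypothetical):

```python
import math
import random

REQUIRED_AREA = {"kitchen": 12.0, "dining": 15.0, "bath": 6.0}  # target areas
RELATED = [("kitchen", "dining")]  # activities that should sit close together

def fitness(layout):
    """Lower is better: area-requirement error plus distance between related
    rooms. Each room is an axis-aligned rectangle (x, y, width, height)."""
    area_err = sum(abs(w * h - REQUIRED_AREA[r])
                   for r, (x, y, w, h) in layout.items())
    centre = lambda r: (layout[r][0] + layout[r][2] / 2,
                        layout[r][1] + layout[r][3] / 2)
    return area_err + sum(math.dist(centre(a), centre(b)) for a, b in RELATED)

def mutate(layout, scale=0.5):
    """Gaussian perturbation of one room's rectangle, a basic GA operator."""
    r = random.choice(list(layout))
    x, y, w, h = (g + random.gauss(0.0, scale) for g in layout[r])
    layout[r] = (x, y, max(1.0, w), max(1.0, h))
```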

Abstract:

Three-dimensional model design is a well-known and studied field with numerous real-world applications. However, the manual construction of these models can be time-consuming for the average user, despite the advantages offered by computational advances. This thesis presents an approach to the design of 3D structures using evolutionary computation and L-systems, which involves the automated production of such designs guided by a strict set of fitness functions. These functions focus on the geometric properties of the models produced as well as their quantifiable aesthetic value, a topic which has not been widely investigated with respect to 3D models. New extensions to existing aesthetic measures are discussed and implemented in the presented system in order to produce designs that are visually pleasing. The system itself facilitates the construction of models requiring minimal user initialization and no user-based feedback throughout the evolutionary cycle. The models evolved by genetic programming are shown to satisfy multiple criteria, conveying a relationship between their assigned aesthetic value and their perceived aesthetic value. The applicability and effectiveness of a multi-objective approach to the problem are also explored, with a focus on both performance and visual results. Although subjective, these results offer insight into future applications and study in the field of computational aesthetics and automated structure design.
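The generative half of such a system can be conveyed in a few lines: an L-system rewrites an axiom string in parallel, and in the evolutionary setting it is the rule set that is evolved while the fitness functions score the geometry and aesthetics of the interpreted string. The branching rule below is a textbook example, not one evolved by the thesis.

```python
def expand(axiom, rules, depth):
    """Parallel L-system rewriting: every symbol with a production is replaced
    simultaneously; symbols without one are copied unchanged."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

# 'F' = draw forward, '+'/'-' = turn, '[' / ']' = push/pop turtle state.
rules = {"F": "F[+F]F[-F]F"}
print(expand("F", rules, 2))
```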