902 results for Computational time
Abstract:
The thesis applies ICC techniques to the probabilistic polynomial complexity classes in order to obtain an implicit characterization of them. The main contribution lies in the implicit characterization of the class PP (which stands for Probabilistic Polynomial Time): a syntactical characterization of PP and a static complexity analyser able to recognise whether an imperative program computes in Probabilistic Polynomial Time. The thesis is divided into two parts. The first part approaches the problem by creating a prototype functional language (a probabilistic variation of the lambda calculus with bounded recursion) that is sound and complete with respect to Probabilistic Polynomial Time. The second part, instead, reverses the problem and develops a feasible way to verify whether a program, written in a prototype imperative programming language, runs in Probabilistic Polynomial Time or not. This thesis can be regarded as one of the first steps for Implicit Computational Complexity over probabilistic classes. Hard open problems remain to investigate and try to solve. Many theoretical aspects are strongly connected with these topics, and I expect that in the future there will be wide attention to ICC and probabilistic classes.
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution with other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework posed many interesting software challenges, one of which was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH). The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
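For intuition, here is a minimal Python sketch of the addressability idea: a min-heap of timers whose handles track their own positions, so arming, cancelling, and expiring are all O(log n) with no linear search. The thesis's ABH is pointer-based; this array-backed version, and every name in it (Timer, AddressableHeap, arm/cancel/expire), are simplifying assumptions for illustration, not the dissertation's implementation.

```python
class Timer:
    """Handle returned to callers; records its current heap position."""
    __slots__ = ("deadline", "callback", "pos")
    def __init__(self, deadline, callback):
        self.deadline, self.callback, self.pos = deadline, callback, -1

class AddressableHeap:
    """Min-heap keyed on deadline, with O(log n) arm/cancel/expire."""
    def __init__(self):
        self._h = []

    def _place(self, timer, i):          # store timer and update its handle
        self._h[i] = timer
        timer.pos = i

    def _sift_up(self, i):
        t = self._h[i]
        while i > 0 and self._h[(i - 1) // 2].deadline > t.deadline:
            self._place(self._h[(i - 1) // 2], i)
            i = (i - 1) // 2
        self._place(t, i)

    def _sift_down(self, i):
        t, n = self._h[i], len(self._h)
        while (c := 2 * i + 1) < n:
            if c + 1 < n and self._h[c + 1].deadline < self._h[c].deadline:
                c += 1
            if self._h[c].deadline >= t.deadline:
                break
            self._place(self._h[c], i)
            i = c
        self._place(t, i)

    def arm(self, deadline, callback):
        timer = Timer(deadline, callback)
        self._h.append(timer)
        self._sift_up(len(self._h) - 1)
        return timer                     # keep the handle to cancel later

    def cancel(self, timer):
        i, last = timer.pos, self._h.pop()
        if i < len(self._h):             # cancelled timer was not the tail
            self._place(last, i)
            self._sift_down(i)
            self._sift_up(i)
        timer.pos = -1

    def expire(self, now):
        """Tick-less expiry: fire every timer whose deadline has passed."""
        while self._h and self._h[0].deadline <= now:
            t = self._h[0]
            self.cancel(t)
            t.callback()
```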
Abstract:
Environmental computer models are deterministic models devoted to predicting environmental phenomena such as air pollution or meteorological events. Numerical model output is given in terms of averages over grid cells, usually at high spatial and temporal resolution. However, these outputs are often biased, of unknown calibration, and not equipped with any information about the associated uncertainty. Conversely, data collected at monitoring stations are more accurate, since they essentially provide the true levels. Given the leading role played by numerical models, it is now important to compare model output with observations. Statistical methods developed to combine numerical model output and station data are usually referred to as data fusion. In this work, we first combine ozone monitoring data with ozone predictions from the Eta-CMAQ air quality model in order to forecast in real time the current 8-hour average ozone level, defined as the average of the previous four hours, the current hour, and predictions for the next three hours. We propose a Bayesian downscaler model based on first differences, with a flexible coefficient structure and an efficient computational strategy to fit model parameters. Model validation for the eastern United States shows a substantial improvement of our fully inferential approach over the current real-time forecasting system. Furthermore, we consider the introduction of temperature data from a weather forecast model into the downscaler, showing improved real-time ozone predictions. Finally, we introduce a hierarchical model to obtain the spatially varying uncertainty associated with numerical model output. We show how we can learn about such uncertainty through suitable stochastic data fusion modelling using some external validation data. We illustrate our Bayesian model by providing the uncertainty map associated with a temperature output over the northeastern United States.
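In symbols (a restatement of the definition above, with notation introduced here for clarity), writing $y_k$ for the observed hourly ozone level at hour $k$ and $\hat{y}_k$ for its forecast, the current 8-hour average at hour $t$ is

\[
\bar{O}_t \;=\; \frac{1}{8}\Bigl(\sum_{k=t-4}^{t} y_k \;+\; \sum_{k=t+1}^{t+3} \hat{y}_k\Bigr),
\]

i.e., five observed hours (the previous four plus the current one) combined with three forecast hours.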
Abstract:
In this thesis we provide a characterization of probabilistic computation in itself, from a recursion-theoretical perspective, without reducing it to deterministic computation. More specifically, we show that probabilistic computable functions, i.e., those functions computed by Probabilistic Turing Machines (PTMs), can be characterized by a natural generalization of Kleene's partial recursive functions which includes, among its initial functions, one that returns the identity or the successor, each with probability 1/2. We then prove the equi-expressivity of the obtained algebra and the class of functions computed by PTMs. In the second part of the thesis we investigate the relations between our recursion-theoretical framework and sub-recursive classes, in the spirit of Implicit Computational Complexity. More precisely, endowing predicative recurrence with a random base function is proved to lead to a characterization of the polynomial-time computable probabilistic functions.
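A minimal Python sketch of such a probabilistic initial function (an illustration of its behaviour only, not the thesis's formal algebra; the name rand_base is hypothetical):

```python
import random

def rand_base(x: int) -> int:
    """Probabilistic initial function: returns x (identity) or x + 1
    (successor), each with probability 1/2."""
    return x + (random.random() < 0.5)

# Composing n applications shifts the input by a Binomial(n, 1/2) amount:
samples = [rand_base(rand_base(rand_base(0))) for _ in range(10_000)]
print(sum(samples) / len(samples))   # empirical mean approaches 3 * 1/2 = 1.5
```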
Abstract:
The aim of the work was to explore the practical applicability of molecular dynamics at different length and time scales. From nanoparticle systems through colloids and polymers to biological systems such as membranes and, finally, living cells, a broad range of materials was considered from a theoretical standpoint. In this dissertation five chemistry-related problems are addressed by means of theoretical and computational methods. The main results can be outlined as follows. (1) A systematic study of the effect of the concentration, chain length, and charge of surfactants on fullerene aggregation is presented. The long-discussed problem of the location of C60 in micelles was addressed, and fullerenes were found in the hydrophobic region of the micelles. (2) The interactions between graphene sheets of increasing size and a phospholipid membrane are quantitatively investigated. (3) A model was proposed to study the structure, stability, and dynamics of MoS2, a material well known for its tribological properties. The telescopic movement of nested nanotubes and the sliding of MoS2 layers are simulated. (4) A mathematical model was proposed to gain understanding of the coupled diffusion-swelling process in poly(lactic-co-glycolic acid), PLGA. (5) A soft-matter cell model is developed to explore the interaction of living cells with artificial surfaces. The effect of the surface properties on the adhesion dynamics of cells is discussed.
Abstract:
The assessment of historical structures is a significant need for the next generations, as historical monuments represent a community's identity and have an important cultural value to society. Most historical structures were built using masonry, which is one of the oldest and most common construction materials used in the building sector since ancient times. Masonry is also considered a complex material: as a composite of brick units and mortar, it affects the structural performance of a building through the different mechanical behaviour arising from the differing geometries and qualities of its components.
Abstract:
Neural dynamic processes correlated over several time scales are found in vivo, in stimulus-evoked as well as spontaneous activity, and are thought to affect the way sensory stimulation is processed. Despite their potential computational consequences, a systematic description of the presence of multiple time scales in single cortical neurons is lacking. In this study, we injected fast-spiking and pyramidal (PYR) neurons in vitro with long-lasting episodes of step-like and noisy, in-vivo-like current. Several processes shaped the time course of the instantaneous spike frequency, which could be reduced to a small number (1-4) of phenomenological mechanisms, either reducing (adapting) or increasing (facilitating) the neuron's firing rate over time. The different adaptation/facilitation processes cover a wide range of time scales, ranging from initial adaptation (<10 ms, PYR neurons only) through fast adaptation (<300 ms) and early facilitation (0.5-1 s, PYR only) to slow (or late) adaptation (on the order of seconds). These processes are characterized by broad distributions of their magnitudes and time constants across cells, showing that multiple time scales are at play in cortical neurons, even in response to stationary stimuli and in the presence of input fluctuations. These processes might be part of a cascade of mechanisms responsible for the power-law behavior of adaptation observed in several preparations, and may have the far-reaching computational consequences that have recently been described.
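As an illustrative sketch only (not the authors' analysis pipeline), the reduction of a measured spike-frequency time course to a few adaptation/facilitation processes can be pictured as fitting a sum of exponentials; the model form, parameter names, and synthetic data below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def rate_model(t, r_inf, a1, tau1, a2, tau2):
    """Instantaneous firing rate: steady-state level plus two exponential
    processes; a_k > 0 gives a rate decaying over time (adaptation),
    a_k < 0 a rate growing toward steady state (facilitation)."""
    return r_inf + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

rng = np.random.default_rng(0)
t = np.linspace(0.01, 10.0, 400)                    # seconds
r_true = rate_model(t, 12.0, 25.0, 0.2, -4.0, 1.0)  # fast adaptation + facilitation
r_obs = r_true + rng.normal(0.0, 0.5, t.size)       # noisy "measurement"

p0 = (10.0, 20.0, 0.1, -2.0, 0.8)                   # initial guess
popt, _ = curve_fit(rate_model, t, r_obs, p0=p0)
print(dict(zip(("r_inf", "a1", "tau1", "a2", "tau2"), popt.round(2))))
```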
Abstract:
The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. We detail some of the design decisions, software paradigms, and operational strategies that have allowed a small number of researchers to provide a wide variety of innovative, extensible software solutions in a relatively short time. The use of an object-oriented programming paradigm, the adoption and development of a software package system, design by contract, distributed development, and collaboration with other projects are elements of this project's success. Individually, each of these concepts is useful and important, but when combined they have provided a strong basis for the rapid development and deployment of innovative and flexible research software for scientific computation. A primary objective of this initiative is the achievement of total remote reproducibility of novel algorithmic research results.
Abstract:
This thesis develops an effective modeling and simulation procedure for a specific thermal energy storage system commonly used and recommended for various applications (such as an auxiliary energy storage system for a solar-heating-based Rankine-cycle power plant). This thermal energy storage system transfers heat from a hot fluid (termed the heat transfer fluid, HTF) flowing in a tube to the surrounding phase change material (PCM). Through an unsteady melting or freezing process, the PCM absorbs or releases thermal energy in the form of latent heat. Both scientific and engineering information is obtained by the proposed first-principles-based modeling and simulation procedure. On the scientific side, the approach accurately tracks the moving melt front (modeled as a sharp liquid-solid interface) and provides all necessary information about the time-varying heat-flow rates, temperature profiles, stored thermal energy, etc. On the engineering side, the proposed approach is unique in its ability to accurately solve, both individually and collectively, all the conjugate unsteady heat transfer problems for each of the components of the thermal storage system. This yields critical system-level information on the various time-varying effectiveness and efficiency parameters for the thermal storage system.
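For a sharp liquid-solid interface at position $s(t)$, the melt-front motion described above is governed, in one plausible 1-D form (the standard Stefan condition, given here as an illustration rather than the thesis's exact formulation), by the interfacial energy balance

\[
\rho\, L_f \,\frac{ds}{dt} \;=\; k_s \left.\frac{\partial T_s}{\partial x}\right|_{x=s(t)} \;-\; k_l \left.\frac{\partial T_l}{\partial x}\right|_{x=s(t)},
\]

where $\rho$ is the PCM density, $L_f$ its latent heat of fusion, and $k_s$, $k_l$ the solid- and liquid-phase thermal conductivities: the mismatch between the conducted heat fluxes on the two sides of the front is what advances or retreats the interface.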
Abstract:
Aims: Angiographic ectasias and aneurysms in stented segments have been associated with late stent thrombosis. Using optical coherence tomography (OCT), some stented segments show coronary evaginations reminiscent of ectasias. The purpose of this study was to explore, using computational fluid dynamics (CFD) simulations, whether OCT-detected coronary evaginations can induce local changes in blood flow. Methods and results: OCT-detected evaginations are defined as outward bulges in the luminal vessel contour between struts, with the depth of the bulge exceeding the actual strut thickness. Evaginations can be characterised cross-sectionally by depth and along the stented segment by total length. Assuming an ellipsoid shape, we modelled 3-D evaginations of different sizes by varying the depth from 0.2-1.0 mm and the length from 1-9 mm. For the flow simulation we used average flow velocity data from non-diseased coronary arteries. The change in flow with varying evagination sizes was assessed using a particle tracing test, in which the particle transit time within the segment with the evagination was compared with that of a control vessel. The presence of the evagination caused a delayed particle transit time, which increased with the evagination size. The change in flow consisted locally of recirculation within the evagination, as well as flow deceleration due to the larger lumen, seen as a deflection of flow towards the evagination. Conclusions: CFD simulation of 3-D evaginations and blood flow suggests that evaginations affect flow locally, with a flow disturbance that increases with increasing evagination size.
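A back-of-the-envelope way to see the deceleration effect (a quasi-1-D continuity argument only; the geometry, flow rate, and the reduction of the 3-D simulation to v = Q/A are all simplifying assumptions, not the study's method):

```python
import numpy as np

# With volumetric flow Q conserved, the axial velocity is v(x) = Q / A(x),
# so a locally larger lumen slows the flow and lengthens the transit time
# t = integral of dx / v(x) = integral of A(x) / Q dx.
Q = 1.0e-6        # volumetric flow rate (m^3/s), coronary order of magnitude
r0 = 1.5e-3       # nominal lumen radius (m)
L = 9.0e-3        # evagination length (m), upper end of the studied range
depth = 1.0e-3    # evagination depth (m), upper end of the studied range

x = np.linspace(0.0, L, 2000)
dx = x[1] - x[0]
# Semi-ellipsoidal bulge profile added to the nominal radius:
bulge = depth * np.sqrt(np.clip(1.0 - (2.0 * x / L - 1.0) ** 2, 0.0, None))

t_evag = np.sum(np.pi * (r0 + bulge) ** 2 / Q) * dx   # with evagination
t_ctrl = L * np.pi * r0 ** 2 / Q                      # control vessel
print(f"relative transit-time delay: {(t_evag - t_ctrl) / t_ctrl:.1%}")
```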
Abstract:
The COSMIC-2 mission is a follow-on of the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) with an upgraded payload for improved radio occultation (RO) applications. The objective of this paper is to develop a near-real-time (NRT) orbit determination system, called the NRT National Chiao Tung University (NCTU) system, to support COSMIC-2 in atmospheric applications and to verify the orbit product of COSMIC. The system is capable of automatic determination of the NRT GPS clocks and the LEO orbit and clock. To assess the NRT (NCTU) system, we use eight days of COSMIC data (March 24-31, 2011), which contain a total of 331 GPS observation sessions and 12,393 RO observable files. Parallel scheduling of the independent GPS and LEO estimations, with automatic time matching, improves the computational efficiency by 64% compared to sequential scheduling. Orbit difference analyses suggest a 10-cm accuracy for the COSMIC orbits from the NRT (NCTU) system, consistent with the NRT University Corporation for Atmospheric Research (UCAR) system. The mean velocity accuracy of the NRT orbits of COSMIC is 0.168 mm/s, corresponding to an error of about 0.051 μrad in the bending angle. The rms differences in the NRT COSMIC clocks and in the GPS clocks between the NRT (NCTU) and the post-processing products are 3.742 and 1.427 ns, respectively. The GPS clocks determined from a partial ground GPS network [NRT (NCTU)] and a full one [NRT (UCAR)] result in mean rms frequency stabilities of 6.1E-12 and 2.7E-12, respectively, corresponding to range fluctuations of 5.5 and 2.4 cm and bending angle errors of 3.75 and 1.66 μrad.
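The quoted range fluctuations follow from the frequency stabilities by multiplying by the speed of light and an averaging interval; assuming $\tau \approx 30$ s (an interval chosen here because it reproduces the quoted figures, not one stated in the abstract):

\[
\sigma_y \, c \, \tau \;=\; 6.1\times10^{-12} \times (3\times10^{8}\ \mathrm{m/s}) \times 30\ \mathrm{s} \;\approx\; 5.5\ \mathrm{cm},
\]

and likewise $2.7\times10^{-12}$ gives $\approx 2.4$ cm.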
Abstract:
PURPOSE To compare postoperative morphological and rheological conditions after eversion carotid endarterectomy versus conventional carotid endarterectomy using computational fluid dynamics. BASIC METHODS Hemodynamic metrics (velocity, wall shear stress, time-averaged wall shear stress, and temporal gradient of wall shear stress) in the carotid arteries were simulated by computational fluid dynamics analysis based on patient-specific data in one patient after conventional carotid endarterectomy and one patient after eversion carotid endarterectomy. PRINCIPAL FINDINGS At systolic peak, the eversion carotid endarterectomy model showed a gradually decreasing pressure along the stream path, whereas the conventional carotid endarterectomy model revealed a high pressure (about 180 Pa) at the carotid bulb. Regions of low wall shear stress in the conventional carotid endarterectomy model were much larger than those in the eversion carotid endarterectomy model, and with lower time-averaged wall shear stress values (conventional carotid endarterectomy: 0.03-5.46 Pa vs. eversion carotid endarterectomy: 0.12-5.22 Pa). CONCLUSIONS Computational fluid dynamics after conventional carotid endarterectomy and eversion carotid endarterectomy disclosed differences in hemodynamic patterns. Larger studies are necessary to assess whether these differences are consistent and might explain the different rates of restenosis between the two techniques.
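For reference, the time-averaged metric above is conventionally defined over a cardiac cycle of period $T$ (a standard CFD definition, not quoted from the paper):

\[
\mathrm{TAWSS} \;=\; \frac{1}{T}\int_0^T \left|\vec{\tau}_w(t)\right| \, dt,
\]

where $\vec{\tau}_w$ is the instantaneous wall shear stress vector; the temporal gradient metric is correspondingly based on $\partial \vec{\tau}_w / \partial t$ over the cycle.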
Abstract:
We investigate parallel algorithms for the solution of the Navier–Stokes equations in space-time. For periodic solutions, the discretized problem can be written as a large non-linear system of equations. This system of equations is solved by a Newton iteration. The Newton correction is computed using a preconditioned GMRES solver. The parallel performance of the algorithm is illustrated.
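A minimal sketch of the inner-outer structure described above (Newton outer iteration, GMRES inner solve), assuming a generic residual F and a user-supplied Jacobian-vector product; the names, tolerances, and toy problem are illustrative, and the space-time discretization itself is omitted:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(F, jvp, u0, tol=1e-8, max_newton=20):
    """Solve F(u) = 0 with Newton's method; each correction du solves
    J(u) du = -F(u) by matrix-free GMRES, where jvp(u, v) ~ J(u) @ v."""
    u = u0.copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:      # converged
            break
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v, u=u: jvp(u, v))
        du, info = gmres(J, -r)          # a preconditioner would go in M=
        u = u + du
    return u

# Toy usage on a small nonlinear system (purely illustrative):
F = lambda u: u**3 - 1.0
jvp = lambda u, v: 3.0 * u**2 * v        # diagonal Jacobian action
print(newton_gmres(F, jvp, np.array([2.0, 0.5])))   # -> approx [1., 1.]
```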