991 results for Task Modeling
Abstract:
Presentation at a seminar organized by the KDK usability working group: How do users' expectations challenge our metadata practices? 30.9.2014.
Abstract:
The diversity of algal banks composed of species of the genera Gracilaria Greville and Hypnea J.V. Lamouroux has been impacted by commercial exploitation and coastal eutrophication. The present study sought to construct dynamic models based on algal physiology to simulate seasonal variations in the biomasses of Gracilaria and Hypnea on an intertidal reef at Piedade Beach in Jaboatão dos Guararapes, Pernambuco State, Brazil. Five 20 × 20 cm plots in a reef pool on a midlittoral reef platform were randomly sampled during April, June, August, October, and December/2009 and in January and March/2010. Water temperature, pH, irradiance, oxygen and salinity levels, as well as the concentrations of ammonia, nitrate and phosphate, were measured at the sampling site. Forcing functions were employed in the model to represent abiotic factors, and algal decay was simulated with a dispersal function. Algal growth was modeled using a logistic function and was found to be sensitive to temperature and salinity. Maximum absorption rates of ammonia and phosphate were higher in Hypnea than in Gracilaria, indicating that the former takes up nutrients more efficiently at higher concentrations. Gracilaria biomass peaked at approximately 120 g (dry weight m-2) in March/2010 and was significantly lower in August/2009; Hypnea biomasses, on the other hand, did not show any significant variations among the different months, indicating that resource competition may influence the productivity of these algae.
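The abstract does not give the model equations; a minimal sketch of the kind of model it describes (logistic growth modulated by temperature and salinity forcing functions, saturating nutrient uptake with a maximum rate, and a dispersal-type decay term) is shown below. The functional forms and all parameter values are illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative forcing functions (assumed shapes, not fitted to the Piedade Beach data)
def temperature(t):   # seawater temperature [deg C], t in days
    return 27.0 + 2.0 * np.sin(2 * np.pi * t / 365.0)

def salinity(t):      # salinity [psu]
    return 35.0 + 1.5 * np.sin(2 * np.pi * (t - 90) / 365.0)

def nutrient(t):      # ambient ammonia concentration [umol/L]
    return 2.0 + 1.0 * np.cos(2 * np.pi * t / 365.0)

def growth_limitation(T, S):
    # Gaussian-type sensitivity of growth to temperature and salinity (assumption)
    return np.exp(-((T - 27.0) / 4.0) ** 2) * np.exp(-((S - 35.0) / 5.0) ** 2)

def uptake(N, v_max, k_half):
    # Saturating nutrient uptake with a maximum absorption rate
    return v_max * N / (k_half + N)

def biomass_ode(t, B, mu_max, K, v_max, k_half, d):
    T, S, N = temperature(t), salinity(t), nutrient(t)
    growth = mu_max * growth_limitation(T, S) * uptake(N, v_max, k_half) * B * (1 - B / K)
    decay = d * B                     # dispersal/decay term
    return growth - decay

# Hypothetical parameters: mu_max [1/day], carrying capacity K [g DW m-2], decay d [1/day]
sol = solve_ivp(biomass_ode, (0, 365), [20.0], args=(0.5, 120.0, 1.0, 1.0, 0.02))
print(f"Simulated biomass after one year: {sol.y[0, -1]:.1f} g DW m-2")
```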
Abstract:
Training in step-down inhibitory avoidance (0.3-mA footshock) is followed by biochemical changes in the rat hippocampus that strongly suggest an involvement of quantitative changes in glutamate AMPA receptors, followed by changes in the dopamine D1 receptor/cAMP/protein kinase A (PKA)/CREB-P signalling pathway, in memory consolidation. AMPA binding to its receptor and levels of the AMPA receptor-specific subunit GluR1 increase in the hippocampus within the first 3 h after training (20-70%). Binding of the specific D1 receptor ligand, SCH23390, and cAMP levels increase within 3 or 6 h after training (30-100%). PKA activity and CREB-P levels show two peaks: a 35-40% increase immediately (0 h) after training, and a second increase 3-6 h later (35-60%). The results correlate with pharmacological findings showing an early post-training involvement of AMPA receptors, and a late involvement of the D1/cAMP/PKA/CREB-P pathway, in memory consolidation of this task.
Abstract:
Within the framework of the working memory model proposed by A. Baddeley and G. Hitch, a dual-task paradigm has been suggested to evaluate the capacity to perform two concurrent tasks simultaneously. This capacity is assumed to reflect the functioning of the central executive component, which appears to be impaired in patients with dysexecutive syndrome. The present study extends the investigation of an index ("mu"), which is supposed to indicate the capacity to coordinate concurrent auditory digit span and tracking tasks, by testing the influence of training on performance in the dual task. Presentation of the same digit sequence lists or of always-different lists did not differentially affect performance. The span length affected the mu values. The improved performance in the tasks under the dual condition closely resembled the improvement in single-task performance. Thus, although training improved performance in the single and dual conditions, especially for the tracking component, the mu values remained stable throughout the sessions when the single tasks were performed first. Conversely, training improved the capacity for dual-task coordination throughout the sessions when the dual task was performed first, addressing the issue of the contribution of within-session practice to the mu index.
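The exact definition of the "mu" index is not reproduced in the abstract. The sketch below illustrates one common way such a combined dual-task index is computed (averaging the proportional dual-task performance of the two component tasks relative to their single-task baselines), with hypothetical scores; it should be read as an assumption about the general form of the index, not as the study's formula.

```python
def dual_task_index(memory_dual, memory_single, tracking_dual, tracking_single):
    """Illustrative combined dual-task index (assumed definition, not necessarily the
    exact 'mu' used in the study): the average of the proportional performance of the
    two component tasks under the dual condition relative to their single-task
    baselines, expressed as a percentage (100 = no dual-task cost)."""
    p_memory = memory_dual / memory_single        # e.g. proportion of lists recalled
    p_tracking = tracking_dual / tracking_single  # e.g. proportion of time on target
    return 100.0 * (p_memory + p_tracking) / 2.0

# Hypothetical session data
print(dual_task_index(memory_dual=0.70, memory_single=0.90,
                      tracking_dual=0.55, tracking_single=0.65))  # ~81.2
```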
Abstract:
The measure "mu", proposed as an index of the ability to coordinate concurrent box-crossing (BC) and digit-span (DS) tasks in the dual task (DT), should reflect the capacity of the executive component of the working memory system. We investigated the effect of practice in BC and of a change in the digit span on mu by adding previous practice trials in BC and diminishing, maintaining or increasing the digit sequence length. The mu behavior was evaluated throughout three trials of the test. Reported strategies in digit tasks were also analyzed. Subjects with diminished span showed the best performance in DT due to a stable performance in DS and BC in the single- and dual-task conditions. These subjects also showed a more stable performance throughout trials. Subjects with diminished span tended to employ effortless strategies, whereas subjects with increased span employed effort-requiring strategies and showed the lowest means of mu. Subjects with initial practice trials showed the best performance in BC and the most differentiated performance between the single- and dual-task conditions in BC. The correlation coefficient between the mu values obtained in the first and second trials was 0.814 for subjects with diminished span and practice trials in BC. It seems that the within-session practice in BC and the performance variability in DS affect the reliability of the index mu. To control these factors we propose the introduction of previous practice trials in BC and a modification of the current method to determine the digit sequence length. This proposal should contribute to the development of a more reliable method to evaluate the executive capacity of coordination in the dual-task paradigm.
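The reliability figure quoted above (a correlation of 0.814 between the mu values obtained in the first and second trials) is a plain test-retest correlation; with hypothetical data it could be computed as follows.

```python
import numpy as np

# Hypothetical mu values for the same subjects on trial 1 and trial 2
mu_trial1 = np.array([78.0, 85.0, 92.0, 70.0, 88.0, 81.0])
mu_trial2 = np.array([80.0, 83.0, 95.0, 68.0, 90.0, 79.0])

# Test-retest reliability as the Pearson correlation between the two trials
r = np.corrcoef(mu_trial1, mu_trial2)[0, 1]
print(f"test-retest r = {r:.3f}")
```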
Abstract:
The application of computational fluid dynamics (CFD) and finite element analysis (FEA) has been growing rapidly in various fields of science and technology. One of the areas of interest is biomedical engineering. Altered hemodynamics inside the blood vessels plays a key role in the development of the arterial disease called atherosclerosis, which is a major cause of human death worldwide. Atherosclerosis is often treated with a stenting procedure to restore normal blood flow. A stent is a tubular, flexible structure, usually made of metal, which is delivered and expanded in the blocked artery. Despite the success rate of the stenting procedure, it is often associated with restenosis (re-narrowing of the artery). The presence of a non-biological device in the artery causes inflammation or re-growth of atherosclerotic lesions in the treated vessels. Several factors, including the design of the stent, the type of stent expansion, the expansion pressure, and the morphology and composition of the vessel wall, influence the restenosis process. Therefore, computational studies play a crucial role in the investigation and optimisation of the factors that influence post-stenting complications. This thesis focuses on stent-vessel wall interactions and on the blood flow in the post-stenting stage of a stenosed human coronary artery. Hemodynamic and mechanical stresses were analysed in three separate stent-plaque-artery models. The plaque was modeled as a multi-layer domain (fibrous cap (FC), necrotic core (NC), and fibrosis (F)) and the arterial wall as a single-layer domain. CFD/FEA simulations were performed using commercial software packages in several models mimicking various stages and morphologies of atherosclerosis. The tissue prolapse (TP) of the stented vessel wall, the distribution of von Mises stress (VMS) inside the various layers of the vessel wall, and the wall shear stress (WSS) along the luminal surface of the deformed vessel wall were measured and evaluated. The results revealed the role of the stenosis size, the thickness of each layer of the atherosclerotic wall, the thickness of the stent strut, the pressure applied for stenosis expansion, and the flow condition in the distribution of stresses. The thicknesses of the FC and NC and the total thickness of the plaque are critical in controlling the stresses inside the tissue. A small change in the morphology of the artery wall can significantly affect the distribution of stresses. In particular, the FC is the layer most sensitive to TP and stresses, which could determine the plaque's vulnerability to rupture. The WSS is highly influenced by the deflection of the artery, which in turn depends on the structural composition of the arterial wall layers. Together with the stenosis size, these factors could play a decisive role in controlling the low WSS values (<0.5 Pa) that are prone to promote restenosis. Moreover, the time-dependent flow altered the percentage of luminal area with WSS values less than 0.5 Pa at different time instants. The non-Newtonian viscosity model of the blood properties significantly affects the prediction of the WSS magnitude. The outcomes of this investigation will help to better understand the roles of the individual layers of atherosclerotic vessels and their potential to provoke restenosis at the post-stenting stage. As a consequence, implementing such an approach to assess post-stenting stresses will assist engineers and clinicians in optimizing stenting techniques to minimize the occurrence of restenosis.
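The abstract repeatedly uses the fraction of the luminal surface exposed to WSS below 0.5 Pa as the restenosis-prone indicator. A minimal post-processing sketch of that metric from per-face WSS magnitudes and face areas (hypothetical arrays standing in for CFD surface-mesh exports, not the thesis data) might look like this.

```python
import numpy as np

def low_wss_area_fraction(wss, face_area, threshold=0.5):
    """Percentage of luminal surface area with wall shear stress below `threshold` [Pa].
    `wss` and `face_area` are per-face values exported from a CFD surface mesh."""
    low = wss < threshold
    return 100.0 * face_area[low].sum() / face_area.sum()

# Hypothetical surface data (Pa, m^2)
rng = np.random.default_rng(0)
wss = rng.lognormal(mean=-0.5, sigma=0.6, size=10_000)
face_area = np.full(10_000, 1e-8)

print(f"{low_wss_area_fraction(wss, face_area):.1f} % of the lumen below 0.5 Pa")
```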
Abstract:
Fluid particle breakup and coalescence are important phenomena in a number of industrial flow systems. This study deals with a gas-liquid bubbly flow in one wastewater cleaning application. A three-dimensional geometric model of the dispersion water system was created in the ANSYS CFD meshing software. Then, a numerical study of the system was carried out by means of unsteady simulations performed in the ANSYS FLUENT CFD software. A single-phase water flow case was set up to calculate the entire flow field using the RNG k-epsilon turbulence model based on the Reynolds-averaged Navier-Stokes (RANS) equations. The bubbly flow case was based on a coupled computational fluid dynamics - population balance model (CFD-PBM) approach. Bubble breakup and coalescence were considered to determine the evolution of the bubble size distribution. The obtained results are considered steps toward optimization of the cleaning process and will be analyzed in order to make the process more efficient.
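As an illustration of the CFD-PBM idea mentioned above, the sketch below evolves a small set of bubble size classes under binary breakup and coalescence with constant kernels. Real couplings use flow-dependent kernels and many more classes; the class grid, rates, and initial condition here are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Volume-doubling bubble size classes so that breakup/coalescence conserve gas volume
v0 = np.pi / 6.0 * (0.5e-3) ** 3              # smallest class volume [m^3] (0.5 mm bubble)
v = v0 * 2.0 ** np.arange(5)                  # class volumes
d = (6.0 * v / np.pi) ** (1.0 / 3.0)          # corresponding diameters [m]
BR = 0.5                                      # breakup rate [1/s] (assumed constant)
CO = 1e-7                                     # coalescence kernel [m^3/s] (assumed constant)

def pbm_rhs(t, n):
    dn = np.zeros_like(n)
    # Binary breakup into two equal daughters (class i -> two of class i-1)
    for i in range(1, len(n)):
        dn[i] -= BR * n[i]
        dn[i - 1] += 2.0 * BR * n[i]
    # Coalescence of two equal bubbles (two of class i -> one of class i+1)
    for i in range(len(n) - 1):
        rate = 0.5 * CO * n[i] ** 2
        dn[i] -= 2.0 * rate
        dn[i + 1] += rate
    return dn

n0 = np.zeros(5)
n0[2] = 1e6                                   # start with all bubbles in the middle class [1/m^3]
sol = solve_ivp(pbm_rhs, (0.0, 10.0), n0)
sauter = (sol.y[:, -1] * d ** 3).sum() / (sol.y[:, -1] * d ** 2).sum()
print(f"Sauter mean diameter after 10 s: {sauter * 1e3:.2f} mm")
```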
Abstract:
Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as from other fields of science including linguistics and the behavioral sciences, are also necessary to build appropriate mathematical models. This topic has received considerable attention as it provides tools for the mathematical representation of the most common means of human communication - natural language. Adding a natural language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model - one that is sufficiently easy to use and understand, yet conveys all the information necessary to avoid misinterpretations. This is, however, not a trivial task, and the link between the linguistic and computational levels of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical levels of decision support models. We discuss several important issues concerning the mathematical representation of the meaning of linguistic expressions, their transformation into the language of mathematics, and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of linguistic modelling for decision support is presented, and the main guidelines for building linguistic models for real-life decision support, which form the basis of our modelling methodology, are outlined. From the theoretical point of view, the issues of representing the meaning of linguistic terms, computing with these representations, and the retranslation process back into the linguistic level (linguistic approximation) are studied in this part of the thesis. We focus on the reasonableness of operations with the meanings of linguistic terms, the correspondence between the linguistic and mathematical levels of the models, and the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support - particularly the loss of meaning due to the transformation of mathematical outputs into natural language and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide the background, necessary context, and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here, formulated as a fuzzy linear programming problem, and a heuristic solution to it is proposed. Uncertainty of outputs, expert knowledge concerning disaster response practice, and the necessity of obtaining outputs that are easy to interpret (and available in a very short time) are reflected in the design of the model. Saaty's analytic hierarchy process (AHP) is considered in two case studies - first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes - particularly, the integration of peer review into the evaluation of R&D outputs is considered.
In the context of HR management, we present a fuzzy rule-based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference. Finally, the last case study is from the area of the humanities - psychological diagnostics is considered, and a linguistic fuzzy model for the interpretation of the outputs of multidimensional questionnaires is suggested. The issue of the quality of data in mathematical classification models is also studied here. A modification of the receiver operating characteristic (ROC) method is presented to reflect the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications in which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide a closer insight into the issues of the practical applications that are considered in the second part of the thesis.
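For context on the AHP case studies mentioned above, the classical Saaty prioritization (principal eigenvector plus the standard consistency ratio, not the weak-consistency or fuzzified variants developed in the thesis) can be sketched as follows; the comparison matrix is hypothetical.

```python
import numpy as np

def ahp_priorities(A):
    """Classical Saaty AHP: priority weights from a reciprocal pairwise comparison
    matrix via the principal eigenvector, plus the consistency ratio CR."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)                        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # Saaty's random index
    return w, ci / ri

# Hypothetical 3x3 comparison matrix (criterion 1 is 3x as important as criterion 2, etc.)
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])
weights, cr = ahp_priorities(A)
print(weights, f"CR = {cr:.3f}")   # CR < 0.1 is conventionally considered acceptable
```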
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computing the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter heavily depends on the chosen importance distribution. For instance, an inappropriate choice of the importance distribution can lead to failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends highly on the chosen proposal distribution. A commonly used proposal distribution is Gaussian. In this kind of proposal, the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
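The abstract does not specify the models used; a minimal bootstrap particle filter for a generic non-linear state space model, using the dynamic model itself as the importance distribution, is sketched below. The model, noise levels, and particle count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative scalar state space model (not one taken from the thesis):
#   x_k = 0.5 x_{k-1} + 25 x_{k-1} / (1 + x_{k-1}^2) + q_k,  q_k ~ N(0, 1)
#   y_k = x_k + r_k,                                          r_k ~ N(0, 1)
def f(x):
    return 0.5 * x + 25.0 * x / (1.0 + x ** 2)

def simulate(n_steps=50):
    x, xs, ys = 0.1, [], []
    for _ in range(n_steps):
        x = f(x) + rng.normal()
        xs.append(x)
        ys.append(x + rng.normal())
    return np.array(xs), np.array(ys)

def bootstrap_pf(ys, n_particles=1000):
    particles = rng.normal(size=n_particles)
    estimates = []
    for y in ys:
        # Importance distribution = the dynamic model itself (bootstrap filter)
        particles = f(particles) + rng.normal(size=n_particles)
        # Weight by the measurement likelihood (log-domain for numerical stability)
        logw = -0.5 * (y - particles) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Multinomial resampling to counter weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)

xs, ys = simulate()
x_hat = bootstrap_pf(ys)
print(f"filtering RMSE: {np.sqrt(np.mean((xs - x_hat) ** 2)):.2f}")
```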
Abstract:
Gasification of biomass is an efficient process to produce liquid fuels, heat and electricity. It is interesting especially for the Nordic countries, where raw material for the processes is readily available. The thermal reactions of light hydrocarbons are a major challenge for industrial applications. At elevated temperatures, light hydrocarbons react spontaneously to form higher molecular weight compounds. In this thesis, this phenomenon was studied through a literature survey, experimental work and a modeling effort. The literature survey revealed that the change in tar composition is likely caused by the kinetic entropy. The role of the surface material is deemed to be an important factor in the reactivity of the system. The experimental results were in accordance with previous publications on the subject. The novelty of the experimental work lies in the time interval used for the measurements combined with an industrially relevant temperature interval. The aspects covered in the modeling include screening of possible numerical approaches, testing of optimization methods and kinetic modelling. No significant numerical issues were observed, so the calculation routines used are adequate for the task. Evolutionary algorithms gave better performance and a better fit than conventional iterative methods such as the Simplex and Levenberg-Marquardt methods. Three models were fitted to the experimental data. The LLNL model was used as a reference model to which the two other models were compared. A compact model which included all the observed species was developed. The parameter estimation performed on that model gave a slightly poorer fit to the experimental data than the LLNL model, but the difference was barely significant. The third tested model concentrated on the decomposition of hydrocarbons and included a theoretical description of the formation of a carbon layer on the reactor walls. Its fit to the experimental data was extremely good. Based on the simulation results and literature findings, it is likely that the surface coverage of carbonaceous deposits is a major factor in thermal reactions.
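As a small illustration of the kind of parameter estimation workflow compared above (a global evolutionary search versus local iterative refinement such as Levenberg-Marquardt), the sketch below fits an Arrhenius rate constant of a hypothetical first-order decomposition to synthetic data; the reaction, data, and bounds are not from the thesis.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

# Hypothetical first-order decomposition A -> products with an Arrhenius rate constant
# k(T) = A0 * exp(-Ea / (R T)); the values below are illustrative assumptions.
R = 8.314  # J/(mol K)

def simulate_conc(params, times, T, c0=1.0):
    A0, Ea = params
    k = A0 * np.exp(-Ea / (R * T))
    return c0 * np.exp(-k * times)          # analytic solution of dc/dt = -k c

# Synthetic "measurements" at two temperatures with added noise
rng = np.random.default_rng(3)
times = np.linspace(0.0, 2.0, 20)           # s
temps = (1000.0, 1200.0)                    # K
true = (1e6, 1.5e5)                         # A0 [1/s], Ea [J/mol]
data = {T: simulate_conc(true, times, T) + rng.normal(0.0, 0.01, times.size) for T in temps}

def residuals(params):
    return np.concatenate([simulate_conc(params, times, T) - data[T] for T in temps])

# Global evolutionary search followed by a local Levenberg-Marquardt refinement
de = differential_evolution(lambda p: np.sum(residuals(p) ** 2),
                            bounds=[(1e4, 1e8), (1.0e5, 2.0e5)], seed=3)
lm = least_squares(residuals, de.x, method="lm")
print("estimated A0 [1/s], Ea [J/mol]:", lm.x)
```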
Abstract:
In this Master's Thesis the characteristics of the chosen fractal microstrip antennas are investigated. The structure of square Sierpinski fractal curves was used for modeling. During the elaboration of this Master's thesis the following steps were undertaken: 1) calculation and simulation of a square microstrip antenna, 2) optimization to obtain the required characteristics at the frequency 2.5 GHz, 3) simulation and calculation of the second and third iterations of the Sierpinski fractal curves, and 4) determination of the radiation patterns and intensity distributions of these antennas. A search for the optimal position of the port and of the fractal elements was also conducted. Prospectively, these structures can be used to create antennas operating simultaneously in different frequency ranges.
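A square Sierpinski fractal (Sierpinski carpet) iteration of the patch metallization can be generated in a few lines; the sketch below only produces the list of metal squares for a given iteration and patch size (the dimensions are hypothetical) and performs no electromagnetic simulation.

```python
def sierpinski_carpet(x, y, size, depth):
    """Return (x, y, size) squares of metallization for a Sierpinski-carpet patch.
    At each iteration the square is split 3x3 and the central cell is removed."""
    if depth == 0:
        return [(x, y, size)]
    squares = []
    s = size / 3.0
    for i in range(3):
        for j in range(3):
            if i == 1 and j == 1:
                continue                      # remove the central sub-square
            squares.extend(sierpinski_carpet(x + i * s, y + j * s, s, depth - 1))
    return squares

# Third-iteration carpet for a 40 mm square patch (hypothetical dimensions)
patch = sierpinski_carpet(0.0, 0.0, 40.0, 3)
print(len(patch), "metal squares")            # 8**3 = 512
```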
Abstract:
Malaria continues to infect millions and kill hundreds of thousands of people worldwide each year, despite over a century of research and attempts to control and eliminate this infectious disease. Challenges such as the development and spread of drug-resistant malaria parasites, insecticide resistance in mosquitoes, climate change, the presence of individuals with subpatent malaria infections (which normally are asymptomatic), and behavioral plasticity in the mosquito hinder the prospects of malaria control and elimination. In this thesis, mathematical models of malaria transmission and control that address the roles of drug resistance, immunity, iron supplementation and anemia, immigration and visitation, and the presence of asymptomatic carriers in malaria transmission are developed. A within-host mathematical model of severe Plasmodium falciparum malaria is also developed. First, a deterministic mathematical model for the transmission of antimalarial drug-resistant parasites with superinfection is developed and analyzed. The possibility of an increase in the risk of superinfection due to iron supplementation and fortification in malaria-endemic areas is discussed. The model results call upon stakeholders to weigh the pros and cons of iron supplementation for individuals living in malaria-endemic regions. Second, a deterministic model of the transmission of drug-resistant malaria parasites, including the inflow of infective immigrants, is presented and analyzed. Optimal control theory is applied to this model to study the impact of various malaria and vector control strategies, such as screening of immigrants, treatment of drug-sensitive infections, treatment of drug-resistant infections, and the use of insecticide-treated bed nets and indoor spraying of mosquitoes. The results of the model emphasize the importance of using a combination of all four control tools for effective malaria intervention. Next, a two-age-class mathematical model for malaria transmission with asymptomatic carriers is developed and analyzed. In the development of this model, four possible control measures are analyzed: the use of long-lasting treated mosquito nets, indoor residual spraying, screening and treatment of symptomatic individuals, and screening and treatment of asymptomatic individuals. The numerical results show that a disease-free equilibrium can be attained if all four control measures are used. A common pitfall for most epidemiological models is the absence of real data; model-based conclusions have to be drawn based on uncertain parameter values. In this thesis, an approach to studying the robustness of optimal control solutions under such parameter uncertainty is presented. Numerical analysis of the optimal control problem in the presence of parameter uncertainty demonstrates the robustness of the optimal control approach: when a comprehensive control strategy is used, the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty of model parameters. Finally, a separate work modeling the within-host Plasmodium falciparum infection in humans is presented. The developed model allows re-infection of already-infected red blood cells.
The model hypothesizes that in severe malaria, due to the parasite's quest for survival and rapid multiplication, Plasmodium falciparum can be absorbed into already-infected red blood cells, which accelerates the rupture rate and consequently causes anemia. Analysis of the model and of parameter identifiability using Markov chain Monte Carlo methods is presented.
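The abstract does not reproduce the model equations; as a minimal point of reference, a classical Ross-Macdonald-style host-vector transmission model (far simpler than the multi-strain, two-age-class, and within-host models developed in the thesis) can be written and integrated as below, with illustrative parameter values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical Ross-Macdonald-style host-vector model (illustrative parameters):
#   dI_h/dt = a b m I_v (1 - I_h) - r I_h      (infectious fraction of humans)
#   dI_v/dt = a c I_h (1 - I_v) - g I_v        (infectious fraction of mosquitoes)
a, b, c = 0.2, 0.3, 0.3    # biting rate [1/day], transmission probabilities
m, r, g = 5.0, 0.01, 0.1   # mosquitoes per human, human recovery, mosquito mortality [1/day]

def rhs(t, y):
    I_h, I_v = y
    dI_h = a * b * m * I_v * (1.0 - I_h) - r * I_h
    dI_v = a * c * I_h * (1.0 - I_v) - g * I_v
    return [dI_h, dI_v]

R0 = (m * a ** 2 * b * c) / (r * g)            # basic reproduction number
sol = solve_ivp(rhs, (0.0, 1000.0), [0.01, 0.0])
print(f"R0 = {R0:.1f}, endemic human prevalence ~ {sol.y[0, -1]:.2f}")
```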
Abstract:
The interactions between the median raphe nucleus (MRN) serotonergic system and the septohippocampal muscarinic cholinergic system in the modulation of immediate working memory storage performance were investigated. Rats with sham or ibotenic acid lesions of the MRN were bilaterally implanted with cannulae in the dentate gyrus of the hippocampus and tested in a light/dark step-through inhibitory avoidance task in which response latency to enter the dark compartment immediately after the shock served as a measure of immediate working memory storage. MRN lesion per se did not alter response latency. Post-training intrahippocampal scopolamine infusion (2 and 4 µg/side) produced a more marked reduction in response latencies in the lesioned animals compared to the sham-lesioned rats. Results suggest that the immediate working memory storage performance is modulated by synergistic interactions between serotonergic projections of the MRN and the muscarinic cholinergic system of the hippocampus.
Abstract:
The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will have a Long Shutdown sometime during 2017 or 2018. During this time there will be maintenance and an opportunity to install new detectors. After the shutdown the LHC will have a higher luminosity. A promising new type of detector for this high-luminosity phase is the Triple-GEM detector. During the shutdown these detectors will be installed at the Compact Muon Solenoid (CMS) experiment. The Triple-GEM detectors are now being developed at CERN, along with a readout ASIC chip for the detector. In this thesis a simulation model was developed for the ASIC's analog front end. The model will help to carry out more extensive simulations and also to simulate the whole chip before the design is finished. The proper functioning of the model was tested with simulations, which are also presented in the thesis.
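The abstract does not describe the front-end architecture; as generic context, a charge-sensitive preamplifier followed by a CR-RC^n semi-Gaussian shaper is a common readout front-end topology, and its response to an instantaneous input charge can be sketched as below. The shaping time, order, and sensitivity are assumed values, not those of the chip modelled in the thesis.

```python
import numpy as np

def crrc_response(t, q_in, tau=50e-9, n=2, sensitivity=10e-3 / 1e-15):
    """Semi-Gaussian CR-RC^n shaper response to an instantaneous input charge q_in [C].
    `sensitivity` converts charge to peak output voltage [V/C]; all values are assumptions."""
    x = np.clip(t / tau, 0.0, None)
    shape = (x ** n) * np.exp(-x) * (np.e / n) ** n   # normalized so the peak equals 1 at t = n*tau
    return sensitivity * q_in * shape

t = np.linspace(0.0, 500e-9, 1000)                    # 0-500 ns
v_out = crrc_response(t, q_in=2e-15)                  # 2 fC input charge (~12500 electrons)
print(f"peak {v_out.max() * 1e3:.1f} mV at t = {t[np.argmax(v_out)] * 1e9:.0f} ns")
```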