916 results for Long memory stochastic process
Abstract:
Vaccination aims at generating memory immune responses able to protect individuals against pathogenic challenges over long periods of time. Subunit vaccine formulations based on safe, but poorly immunogenic, antigenic entities must be combined with adjuvant molecules to make them efficient against infections. We have previously shown that gas-filled microbubbles (MB) are potent antigen-delivery systems. This study compares the ability of various ovalbumin-associated MB (OVA-MB) formulations to induce antigen-specific memory immune responses and evaluates long-term protection against bacterial infections. When dendritic cell reactivity to MB constituents was initially tested, palmitic acid elicited the highest degree of activation. Subcutaneous immunization of naïve wild-type mice with the OVA-MB formulation comprising the highest palmitic acid content and devoid of PEG2000 was found to trigger the most pronounced Th1-type response, as reflected by robust IFN-γ and IL-2 production. Both T cell and antibody responses persisted for at least 6 months after immunization. At that time, systemic infection with OVA-expressing Listeria monocytogenes was performed. Partial protection of vaccinated mice was demonstrated by a reduction of the bacterial load in both the spleen and the liver. We conclude that antigen-bound MB exhibit promising properties as a vaccine candidate ensuring prolonged maintenance of protective immunity.
Abstract:
Species may cope with rapid habitat changes by distribution shifts or adaptation to new conditions. A common feature of these responses is that they depend on how the process of dispersal connects populations, both demographically and genetically. We analyzed the genetic structure of a near-threatened high-Arctic seabird, the ivory gull (Pagophila eburnea), in order to infer the connectivity among gull colonies. We analyzed 343 individuals sampled from 16 localities across the circumpolar breeding range of ivory gulls, from northern Russia to the Canadian Arctic. To explore the roles of natal and breeding dispersal, we developed a population genetic model to relate dispersal behavior to the observed genetic structure of worldwide ivory gull populations. Our key finding is the striking genetic homogeneity of ivory gulls across their entire distribution range. The lack of population genetic structure among colonies, in tandem with independent evidence of movement among colonies, suggests that ongoing effective dispersal is occurring across the Arctic region. Our results contradict the dispersal patterns generally observed in seabirds, where species' movement capabilities are often not indicative of dispersal patterns. Model predictions show how natal and breeding dispersal may combine to shape the genetic homogeneity among ivory gull colonies separated by up to 2800 km. Although field data will be key to determining the role of dispersal in the demography of local colonies and to refining the respective impacts of natal versus breeding dispersal, conservation planning needs to consider ivory gulls as a genetically homogeneous, Arctic-wide metapopulation effectively connected through dispersal.
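As a rough point of reference for how effective dispersal erases genetic structure, the textbook island-model expectation (a generic illustration, not the population genetic model developed in this study) relates equilibrium differentiation among colonies to the effective migration rate:

    F_{ST} \approx \frac{1}{1 + 4 N_e m}

where N_e is the effective colony size and m the per-generation dispersal (migration) rate; even a modest number of effective dispersers per generation (N_e m on the order of a few individuals) drives F_ST toward zero, consistent with the genetic homogeneity reported above.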
Abstract:
The stochastic convergence amongst the Mexican federal entities is analyzed in a panel data framework. The joint consideration of cross-section dependence and multiple structural breaks is required to ensure that the statistical inference is based on statistics with good statistical properties. Once these features are accounted for, evidence in favour of stochastic convergence is found. Since stochastic convergence is a necessary, yet insufficient, condition for convergence as predicted by economic growth models, the paper also investigates whether a β-convergence process has taken place. We found that the Mexican states have followed either heterogeneous convergence patterns or a divergence process throughout the analyzed period.
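For context, stochastic convergence of a state is typically assessed by testing whether the log of its per-capita income relative to the national average contains a unit root. A standard ADF-type specification (a textbook form, not the paper's exact statistic, which additionally accommodates cross-section dependence and multiple structural breaks) is:

    \Delta y_{i,t} = \mu_i + \delta_i t + \rho_i\, y_{i,t-1} + \sum_{j=1}^{p_i} \gamma_{i,j}\, \Delta y_{i,t-j} + \varepsilon_{i,t}

where y_{i,t} is the log of relative per-capita income of state i; rejecting the unit-root null H_0: ρ_i = 0 in favour of ρ_i < 0 is evidence of stochastic convergence for that state.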
Abstract:
The aim of the thesis is to devise a framework for analyzing simulation games, in particular introductory supply chain simulation games, which are used in education and process development. The framework is then applied to three case examples, which are introductory supply chain simulation games used at Lappeenranta University of Technology. The theoretical part of the thesis studies simulation games in the context of education and training as well as of process management. Simulation games can be seen as learning processes which comprise briefing, a micro cycle, and debriefing, which includes observation and reflection as well as conceptualization. The micro cycle, i.e. the game itself, is defined through elements and characteristics. Both briefing and debriefing ought to support the micro cycle. The whole learning process needs to support the learning objectives of the simulation game. Based on the analysis of the case simulation games, suggestions are made on how to boost the debriefing and promote long-term effects of the games. In addition, a framework is suggested for use in designing simulation games, and the characteristics of introductory supply chain simulation games are defined: they are designed for general purposes, are simple and operated manually, are multifunctional interplays, and last about 2.5–4 hours. Participants cooperate during a game run, and competition arises between different runs or game sessions.
Abstract:
The human language-learning ability persists throughout life, indicating considerable flexibility at the cognitive and neural level. This ability spans from expanding the vocabulary in the mother tongue to acquisition of a new language with its lexicon and grammar. The present thesis consists of five studies that tap both of these aspects of adult language learning by using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) during language processing and language learning tasks. The thesis shows that learning novel phonological word forms, either in the native tongue or when exposed to a foreign phonology, activates the brain in similar ways. The results also show that novel native words readily become integrated in the mental lexicon. Several studies in the thesis highlight the left temporal cortex as an important brain region in learning and accessing phonological forms. Incidental learning of foreign phonological word forms was reflected in functionally distinct temporal lobe areas that, respectively, reflected short-term memory processes and more stable learning that persisted to the next day. In a study where explicitly trained items were tracked for ten months, it was found that enhanced naming-related temporal and frontal activation one week after learning was predictive of good long-term memory. The results suggest that memory maintenance is an active process that depends on mechanisms of reconsolidation, and that these processes vary considerably between individuals. The thesis puts special emphasis on studying language learning in the context of language production. The neural foundation of language production has been studied considerably less than that of language perception, especially at the sentence level. A well-known paradigm in language production studies is picture naming, also used as a clinical tool in neuropsychology. This thesis shows that accessing the meaning and the phonological form of a depicted object are subserved by different neural implementations. Moreover, a comparison between action and object naming from identical images indicated that the grammatical class of the retrieved word (verb, noun) is less important than the visual content of the image. In the present thesis, picture naming was further modified into a novel paradigm in order to probe sentence-level speech production in a newly learned miniature language. Neural activity related to grammatical processing did not differ between the novel language and the mother tongue, but stronger neural activation for the novel language was observed during the planning of the upcoming output, likely related to more demanding lexical retrieval and short-term memory. In sum, the thesis aimed at examining language learning by combining different linguistic domains, such as phonology, semantics, and grammar, in a dynamic description of language processing in the human brain.
Abstract:
Paper presented at the 40th Annual Conference of LIBER (Ligue des Bibliothèques Européennes de Recherche - Association of European Research Libraries) on July 1st, 2011, together with the slides used at the presentation.
Abstract:
This thesis considers modeling and analysis of noise and interconnects in on-chip communication. Besides transistor count and speed, the capabilities of a modern design are often limited by on-chip communication links. These links typically consist of multiple interconnects that run parallel to each other for long distances between functional or memory blocks. Due to the scaling of technology, the interconnects have considerable electrical parasitics that affect their performance, power dissipation and signal integrity. Furthermore, because of electromagnetic coupling, the interconnects in the link need to be considered as an interacting group instead of as isolated signal paths. There is a need for accurate and computationally efficient models in the early stages of the chip design process to assess or optimize issues affecting these interconnects. For this purpose, a set of analytical models is developed for on-chip data links in this thesis. First, a model is proposed for modeling crosstalk and intersymbol interference. The model takes into account the effects of inductance, initial states and bit sequences. Intersymbol interference is shown to affect crosstalk voltage and propagation delay depending on bus throughput and the amount of inductance. Next, a model is proposed for the switching current of a coupled bus. The model is combined with an existing model to evaluate power supply noise. The model is then applied to reduce both functional crosstalk and power supply noise caused by a bus as a trade-off with time. The proposed reduction method is shown to be effective in reducing long-range crosstalk noise. The effects of process variation on encoded signaling are then modeled. In encoded signaling, the input signals to a bus are encoded using additional signaling circuitry. The proposed model includes variation in both the signaling circuitry and in the wires to calculate the total delay variation of a bus. The model is applied to study level-encoded dual-rail and 1-of-4 signaling. In addition to regular voltage-mode and encoded voltage-mode signaling, current-mode signaling is a promising technique for global communication. A model for energy dissipation in RLC current-mode signaling is proposed in the thesis. The energy is derived separately for the driver, the wire and the receiver termination.
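As a first-order point of reference for the energy comparison between signaling schemes (standard textbook estimates, not the thesis's detailed driver/wire/termination decomposition; C_w, I_b and T_b are generic symbols assumed here), the energy drawn from the supply per low-to-high transition of a voltage-mode wire of total capacitance C_w, and the energy of a current-mode link biased with a static current I_b over a bit time T_b, can be approximated as:

    E_{\text{VM}} \approx C_w\, V_{dd}^2, \qquad E_{\text{CM}} \approx V_{dd}\, I_b\, T_b

so current-mode signaling trades the capacitive switching energy of a full voltage swing for a static bias term that grows with bit time.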
Abstract:
In any decision making under uncertainty, the goal is usually to minimize the expected cost. The minimization of cost under uncertainty is usually done by optimization. For simple models, the optimization can easily be done using deterministic methods. However, many models in practice contain complex and varying parameters that cannot easily be taken into account using the usual deterministic methods of optimization. Thus, it is very important to look for other methods that can be used to gain insight into such models. The MCMC method is one of the practical methods that can be used for optimization of stochastic models under uncertainty. This method is based on simulation and provides a general methodology which can be applied to nonlinear and non-Gaussian state models. The MCMC method is very important for practical applications because it is a unified estimation procedure which simultaneously estimates both parameters and state variables. MCMC computes the distribution of the state variables and parameters given the data measurements. The MCMC method is also fast in terms of computing time when compared to other optimization methods. This thesis discusses the use of Markov chain Monte Carlo (MCMC) methods for optimization of stochastic models under uncertainty. The thesis begins with a short discussion of Bayesian inference, MCMC and stochastic optimization methods. Then an example is given of how MCMC can be applied to maximize production at minimum cost in a chemical reaction process. It is observed that this method performs better in optimizing the given cost function, with very high certainty.
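A minimal sketch of the random-walk Metropolis algorithm behind this kind of MCMC-based optimization, under the assumption that exp(-cost) is treated as an unnormalized posterior so the chain concentrates in low-cost regions; the quadratic cost function below is a hypothetical placeholder, not the thesis's chemical-reaction model:

    import numpy as np

    def metropolis(log_post, x0, n_iter=20000, step=0.1, rng=None):
        """Random-walk Metropolis sampler over a parameter vector."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float)
        lp = log_post(x)
        chain = np.empty((n_iter, x.size))
        for i in range(n_iter):
            prop = x + step * rng.standard_normal(x.size)   # symmetric proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:         # accept/reject step
                x, lp = prop, lp_prop
            chain[i] = x
        return chain

    # Hypothetical expected-cost surface with its minimum at theta = (1.0, 0.5);
    # exp(-cost) plays the role of an unnormalized posterior.
    def log_post(theta):
        cost = (theta[0] - 1.0) ** 2 + 10.0 * (theta[1] - 0.5) ** 2
        return -cost

    samples = metropolis(log_post, x0=[0.0, 0.0])
    print("posterior mean (approximate minimizer):", samples[5000:].mean(axis=0))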
Abstract:
As long as the incidence of stroke continues to grow, patients with large right hemisphere lesions suffering from hemispatial neglect will require neuropsychological evaluation and rehabilitation. The inability to process information, especially that coming from the left side, accompanied by a magnetic orientation to the ipsilesional side, represents a real challenge for rehabilitation. This dissertation is concerned with crucial aspects of the clinical neuropsychological practice of hemispatial neglect. In studying the convergence of the visual and behavioural test batteries in the assessment of neglect, nine of the seventeen patients who completed both the conventional subtests of the Behavioural Inattention Test and the Catherine Bergego Scale assessments showed a similar severity of neglect and thus good convergence of the two tests. However, patients with neglect and hemianopia had poorer scores in the line bisection test, and they displayed stronger neglect in behaviour than patients with pure neglect. The second study examined whether arm activation, modified from Constraint-Induced Movement Therapy, could be applied as neglect rehabilitation alone, without any visual training. Twelve acute or subacute patients were randomized into two rehabilitation groups: arm activation training or traditional voluntary visual scanning training. Neglect was ameliorated significantly or almost significantly in both training groups due to rehabilitation, with the effect being maintained for at least six months. In studying the reflections of hemispatial neglect on visual memory, the associations between severity of neglect and visual memory performance were explored. The performances of acute and subacute patients with hemispatial neglect were compared with the performances of matched healthy control subjects. As hypothesized, encoding from the left side and immediate recall of visual material were significantly compromised in patients with neglect. Another mechanism of neglect affecting visual memory processes is observed in delayed visual reproduction. Delayed recall demands that the individual make a match helped by a cue, or it requires a search for relevant material in long-term memory storage. In the case of representational neglect, the search may succeed, but the left side of the recollected memory still fails to open. Visual and auditory evoked potentials were measured in 21 patients with hemispatial neglect. Stimuli coming from the left or right were processed differently in both sensory modalities in acute and subacute patients as compared with chronic patients. The differences equalized during the course of recovery. Recovery from hemispatial neglect was strongly associated with early rehabilitation and with the severity of neglect. Extinction was common in patients with neglect, and it did not ameliorate with the recovery of neglect. The presence of the pusher symptom hampered amelioration of visual neglect in acute and subacute stroke patients, whereas depression did not have any significant effect in the early phases after the stroke. However, depression had an unfavourable effect on recovery in the chronic phase. In conclusion, the combination of neglect and hemianopia may explain part of the residual behavioural neglect that is no longer evident in visual testing. Further research is needed in order to determine which specific rehabilitation procedures would be most beneficial for patients suffering from the combination of neglect and hemianopia.
Arm activation should be included in rehabilitation programs for neglect; it is a useful technique for patients who need bedside treatment in the acute phase. With respect to the deficit in visual memory associated with neglect, the possible mechanisms of the lateralized deficit in delayed recall need to be further examined and clarified. Intensive treatment induced recovery in both severe and moderate visual neglect even long after the first two to three months after the stroke.
Abstract:
The aim of the thesis was to study quality management with a process approach and to find out how to utilize process management to improve quality. The operating environment of organizations has changed. Organizations are focusing on their core competences and networking with suppliers and customers to ensure more effective and efficient value creation for the end customer. Quality management is moving from inspection of the output to prevention of problems from occurring in the first place, and management thinking is changing from a functional approach to a process approach. The theoretical part of the thesis examines how to define quality, how to achieve good quality, how to improve quality, and how to make sure the improvement continues as a never-ending cycle. A selection of quality tools is introduced. The process approach to quality management is described and compared to the functional approach, which is the traditional way to manage operations and quality. Customer focus is also studied, and it is shown that, to ensure long-term customer commitment, an organization needs to react to changing customer requirements and wishes by constantly improving its processes. In the experimental part, the theories are tested in a process improvement business case. It is shown how to execute a process improvement project, starting from defining the customer requirements and continuing with defining the process ownership, roles and responsibilities, boundaries, interfaces and the actual process activities. Control points and measures are determined for the process, as well as the feedback and corrective action process, to ensure that continual improvement can be achieved and to enable verification that customer requirements are fulfilled.
Abstract:
Quite often, the construction of a pulp mill involves establishing the size of the tanks that will accommodate the material from the various processes, in which case estimating the right tank size a priori is vital. Hence, simulation of the whole production process would be worthwhile. Therefore, there is a need to develop mathematical models that mimic the behavior of the output from the various production units of the pulp mill and serve as simulators. Markov chain models, autoregressive moving average (ARMA) models, mean-reversion models with ensemble interaction, and Markov regime-switching models are proposed for that purpose.
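As a sketch of one of the proposed model classes, the following simulates a mean-reverting (Ornstein-Uhlenbeck) process as a stand-in for the output of a single production unit; the parameter values are hypothetical placeholders rather than estimates fitted to mill data:

    import numpy as np

    def simulate_ou(mu=100.0, theta=0.5, sigma=8.0, x0=100.0, dt=0.1, n_steps=1000, rng=None):
        """Euler discretization of the mean-reverting process
        dX_t = theta * (mu - X_t) dt + sigma dW_t."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.empty(n_steps + 1)
        x[0] = x0
        for t in range(n_steps):
            dw = np.sqrt(dt) * rng.standard_normal()          # Brownian increment
            x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * dw
        return x

    flow = simulate_ou()   # one synthetic trajectory of a process-unit output
    print("mean:", flow.mean(), "std:", flow.std())

Feeding such simulated trajectories into a tank mass balance gives a straightforward way to test candidate tank sizes before construction.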
Abstract:
Japan has been a major actor in the field of development cooperation for five decades, even holding the title of largest donor of Official Development Assistance (ODA) during the 1990s. Financial flows, however, are subject to pre-existing paradigms that dictate both donor and recipient behaviour. In this respect Japan has been left wanting for more recognition. The dominance of the so-called ‘Washington Consensus’ embodied in the International Monetary Fund (IMF) and the World Bank has long circumvented any indigenous approaches to development problems. The Tokyo International Conference on African Development (TICAD) is a development cooperation conference that Japan has hosted every five years since 1993. As the main organizer of the conference, Japan has sought a leading position in African development. This has come in the wake of success in the Asian region, where Japan has called attention to its role in the so-called ‘Asian Miracle’ of fast-growing economies. These aspirations have enabled Japan to try to assert itself as a major player in directing the course of global development discourse, using historical narratives from both Asia and Africa. Over the years, TICAD has evolved into a continuous process with ministerial and follow-up meetings in between conferences. Each conference has produced a declaration that stipulates the way the participants approach the question of African development. Although TICAD is a multilateral framework, Japan has over the years made its presence more and more felt within the process. This research examines the way Japan approaches the paradigms of international development cooperation and tries to direct them in the context of the TICAD process. Supplementing these questions are inquiries concerning Japan's foreign policy aspirations. The research shows that Japan has utilized the conference platform to contest other development actors, especially the dominant forces of the IMF and the World Bank, in the development discourse debate. Japan's dominance of the process is evident in the narratives found in the conference documents. Relative success has come from remaining consistent, as shown by the acceptance of items from the TICAD agenda in other forums, such as the G8. But the emergence of new players such as China has changed the playing field, as they engage other developing countries on a more equal level.
Abstract:
A stochastic differential equation (SDE) is a differential equation in which some of the terms, and hence its solution, are stochastic processes. SDEs play a central role in modeling physical systems in finance, biology and engineering, to mention a few. In the modeling process, the computation of the trajectories (sample paths) of solutions to SDEs is very important. However, the exact solution to an SDE is generally difficult to obtain due to the non-differentiability of realizations of the Brownian motion. There exist approximation methods for the solutions of SDEs. The solutions are continuous stochastic processes that represent diffusive dynamics, a common modeling assumption for financial, biological, physical and environmental systems. This Master's thesis is an introduction to and survey of numerical solution methods for stochastic differential equations. Standard numerical methods, local linearization methods and filtering methods are described in detail. We compute the root mean square errors for each method, from which we propose a better numerical scheme. Stochastic differential equations can be formulated from given ordinary differential equations. In this thesis, we describe two kinds of formulation: parametric and non-parametric techniques. The formulation is based on the epidemiological SEIR model. These methods have a tendency to increase the number of parameters in the constructed SDEs and hence require more data. We compare the two techniques numerically.
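A minimal sketch of the Euler-Maruyama scheme, the simplest of the standard methods surveyed; geometric Brownian motion is used here only because its exact solution permits a root-mean-square-error check against the same Brownian path, and it is not the thesis's SEIR-based formulation:

    import numpy as np

    def euler_maruyama(x0, mu, sigma, T=1.0, n_steps=500, rng=None):
        """Euler-Maruyama scheme for dX_t = mu*X_t dt + sigma*X_t dW_t
        (geometric Brownian motion), compared against the exact solution."""
        rng = np.random.default_rng(0) if rng is None else rng
        dt = T / n_steps
        dw = np.sqrt(dt) * rng.standard_normal(n_steps)       # Brownian increments
        x = np.empty(n_steps + 1)
        x[0] = x0
        for k in range(n_steps):
            x[k + 1] = x[k] + mu * x[k] * dt + sigma * x[k] * dw[k]
        # Exact solution driven by the same Brownian path, for an RMSE check.
        w = np.concatenate(([0.0], np.cumsum(dw)))
        t = np.linspace(0.0, T, n_steps + 1)
        exact = x0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * w)
        rmse = np.sqrt(np.mean((x - exact) ** 2))
        return x, exact, rmse

    _, _, rmse = euler_maruyama(x0=1.0, mu=0.05, sigma=0.2)
    print("RMSE vs. exact GBM path:", rmse)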
Abstract:
We investigated the long-lasting effect of peripheral injection of the neuropeptide substance P (SP) and of some N- or C-terminal SP fragments (SPN and SPC, respectively) on retention test performance in avoidance learning. Male Wistar rats (220 to 280 g) were trained in an inhibitory step-down avoidance task and tested 24 h or 21 days later. Immediately after the training trial, rats received an intraperitoneal injection of SP (50 µg/kg), SPN 1-7 (167 µg/kg) or SPC 7-11 (134 µg/kg). Control groups were injected with vehicle or with SP 5 h after the training trial. The immediate post-training administration of SP and SPN, but not SPC, facilitated avoidance behavior in rats tested 24 h or 21 days later, i.e., the retention test latencies of the SP and SPN groups were significantly longer (P<0.05, Mann-Whitney U-test) at both training-test intervals. These observations suggest that the memory-enhancing effect of SP is long-lasting and that the amino acid sequence responsible for this effect is encoded by its N-terminal part.
Abstract:
Male Wistar rats were trained in one-trial step-down inhibitory avoidance using a 0.4-mA footshock. At various times after training (0, 1.5, 3, 6 and 9 h for the animals implanted into the CA1 region of the hippocampus; 0 and 3 h for those implanted into the amygdala), these animals received microinfusions of SKF38393 (7.5 µg/side), SCH23390 (0.5 µg/side), norepinephrine (0.3 µg/side), timolol (0.3 µg/side), 8-OH-DPAT (2.5 µg/side), NAN-190 (2.5 µg/side), forskolin (0.5 µg/side), KT5720 (0.5 µg/side) or 8-Br-cAMP (1.25 µg/side). Rats were tested for retention 24 h after training. When given into the hippocampus 0 h post-training, norepinephrine enhanced memory whereas KT5720 was amnestic. When given 1.5 h after training, all treatments were ineffective. When given 3 or 6 h post-training, 8-Br-cAMP, forskolin, SKF38393, norepinephrine and NAN-190 caused memory facilitation, while KT5720, SCH23390, timolol and 8-OH-DPAT caused retrograde amnesia. Again, at 9 h after training, all treatments were ineffective. When given into the amygdala, norepinephrine caused retrograde facilitation at 0 h after training. The other drugs infused into the amygdala did not cause any significant effect. These data suggest that in the hippocampus, but not in the amygdala, a cAMP/protein kinase A pathway is involved in memory consolidation at 3 and 6 h after training, which is regulated by D1, β, and 5-HT1A receptors. This correlates with data on increased post-training cAMP levels and a dual peak of protein kinase A activity and CREB-P levels (at 0 and 3-6 h) in rat hippocampus after training in this task. These results suggest that the hippocampus, but not the amygdala, is involved in long-term storage of step-down inhibitory avoidance in the rat.