807 results for Mathematical ability


Relevance:

30.00%

Publisher:

Abstract:

The research activity carried out during the PhD course focused on the development of mathematical models of cognitive processes and on their validation against data available in the literature, with a double aim: i) to achieve a better interpretation and explanation of the large amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the development of networks of neural oscillators, to investigate the mechanisms that synchronize neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the mathematical modelling of multisensory integration processes (e.g., visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus, SC) and which are fundamental for orienting motor and attentive responses to stimuli from the external world. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA).

PART 1. The representation of objects in a number of cognitive functions, such as perception and recognition, involves distributed processes across different cortical areas. One of the main neurophysiological questions is how the correlation between these disparate areas is realized, so as to group together the characteristics of the same object (the binding problem) and to keep segregated the properties belonging to different objects simultaneously present (the segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which: i) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; ii) recognition of the object is realized through the simultaneous activation of the cortical areas representing its different features; iii) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt Rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (wrong or missing features, shared features, superimposed noise).
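
As a point of reference for the models of Chapters 1.1-1.2, here is a minimal Python sketch of a single Wilson-Cowan excitatory/inhibitory pair; the sigmoid and coupling constants are illustrative choices (not the thesis's parameters), and the full models couple many such units through similarity- and knowledge-based synapses.

```python
import numpy as np

def sigmoid(x, a=1.3, theta=4.0):
    """Wilson-Cowan style sigmoid activation (illustrative parameters)."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan_step(E, I, inp, dt=0.05, tau=1.0,
                      c1=16.0, c2=12.0, c3=15.0, c4=3.0):
    """One explicit-Euler step for an excitatory (E) / inhibitory (I)
    population pair; c1..c4 are hypothetical coupling constants."""
    dE = (-E + sigmoid(c1 * E - c2 * I + inp)) / tau
    dI = (-I + sigmoid(c3 * E - c4 * I)) / tau
    return E + dt * dE, I + dt * dI

# For suitable parameters and a sustained input, the pair settles into
# a limit cycle; arrays of such units, coupled by "Gestalt Rule"
# synapses, can then synchronize (binding) or phase-separate
# (segmentation) in the gamma range.
E, I, trace = 0.1, 0.1, []
for _ in range(2000):
    E, I = wilson_cowan_step(E, I, inp=2.0)
    trace.append(E)
```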
In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (the lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network realizes some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits).
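
The object-word association just described can be caricatured with a plain Hebbian outer-product update on hypothetical binary patterns; the thesis trains its inter-network synapses with a time-dependent rule on oscillator activities, so the sketch below is only the static skeleton of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_words = 40, 10
# Hypothetical binary sensory-motor pattern for each of 10 objects.
patterns = (rng.random((n_words, n_features)) < 0.2).astype(float)
W = np.zeros((n_words, n_features))     # word <- feature synapses

def hebbian_update(W, pre, post, lr=0.05):
    """Plain Hebbian co-activation (a static stand-in for the
    time-dependent rule used in the thesis)."""
    return W + lr * np.outer(post, pre)

# Training: each object is presented together with its word.
for w in range(n_words):
    word = np.zeros(n_words)
    word[w] = 1.0
    W = hebbian_update(W, pre=patterns[w], post=word)

# Retrieval from incomplete information: a degraded feature pattern
# still activates the correct word most strongly (here, word 3).
cue = patterns[3] * (rng.random(n_features) < 0.7)
print(int(np.argmax(W @ cue)))
```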
PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas contain neurons capable of such a task; one of the best known is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than the response to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of the individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: models and neural networks can place the mass of data accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; the model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This work was carried out in collaboration with Professor B.E. Stein and Doctor B. Rowland during a six-month stay at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under cortical activation and deactivation. With a single set of parameters, the model mimics the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
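
Enhancement and inverse effectiveness can be illustrated with a single static sigmoidal unit, as in the minimal sketch below; the weights and sigmoid parameters are hypothetical, and the published model is far richer (topological maps, interneuron competition, cortical gating, NMDA dynamics).

```python
import numpy as np

def sc_response(visual, auditory, w_v=1.0, w_a=1.0, theta=3.0, slope=1.5):
    """Static sigmoid caricature of an SC multisensory neuron
    (hypothetical parameters, not those of the published model)."""
    drive = w_v * visual + w_a * auditory
    return 1.0 / (1.0 + np.exp(-slope * (drive - theta)))

def enhancement(visual, auditory):
    """Multisensory enhancement (%) relative to the best unisensory
    response, the usual definition in the SC literature."""
    best_uni = max(sc_response(visual, 0.0), sc_response(0.0, auditory))
    multi = sc_response(visual, auditory)
    return 100.0 * (multi - best_uni) / best_uni

# Inverse effectiveness: weaker paired stimuli give proportionately
# larger enhancement than stronger ones.
print(round(enhancement(1.0, 1.0)))   # weak pair   -> ~285% enhancement
print(round(enhancement(3.0, 3.0)))   # strong pair -> ~98% enhancement
```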

Relevance:

30.00%

Publisher:

Abstract:

In spite of the movement to turn political science into a real science, various mathematical methods that are now staples of physics, biology, and even economics remain thoroughly uncommon in political science, especially in the study of civil war. This study seeks to apply such methods - specifically, ordinary differential equations (ODEs) - to model civil war based on what one might dub the capabilities school of thought, which roughly holds that civil wars end only when one side's ability to make war falls far enough to make peace truly attractive. I construct several different ODE-based models and then test them all to see which best predicts the instantaneous capabilities of both sides of the Sri Lankan civil war in the period from 1990 to 1994, given parameters and initial conditions. The model that the tests declare most accurate gives very accurate predictions of state military capabilities and reasonable short-term predictions of cumulative deaths. Analysis of the model reveals how strongly rebel finances bear on the sustainability of insurgency; most notably, the number of troops required to put down the Tamil Tigers falls by nearly a full order of magnitude when Tiger foreign funding is stopped. The study thus demonstrates that accurate foresight may come from relatively simple dynamical models, and it points to the great potential of advanced and currently unconventional non-statistical mathematical methods in political science.
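
To give a flavor of the approach, the sketch below integrates a hypothetical pair of Lanchester-style attrition ODEs with an external-funding term for the rebels; the functional form, parameters, and numbers are illustrative stand-ins, not the models actually tested in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def capabilities(t, y, a, b, f, c):
    """Hypothetical Lanchester-style attrition model: S = state
    capability, R = rebel capability, f = external rebel funding."""
    S, R = y
    dS = -a * R                  # state losses from rebel action
    dR = -b * S + f - c * R      # rebel losses, funding inflow, upkeep
    return [dS, dR]

# Shutting off foreign funding (f -> 0) depresses the rebel trajectory,
# echoing the study's finding about the Tigers' reliance on external
# finances. (This toy model lets capabilities go negative; a real model
# would clamp them at zero.)
for f in (2.0, 0.0):
    sol = solve_ivp(capabilities, (0.0, 10.0), [100.0, 40.0],
                    args=(0.05, 0.15, f, 0.10))
    print(f, sol.y[1, -1])       # final rebel capability
```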

Relevance:

30.00%

Publisher:

Abstract:

This work reports the development of a mathematical model and a distributed, multivariable computer-control scheme for a pilot-plant double-effect climbing-film evaporator. A distributed-parameter model of the plant has been developed and the time-domain model transformed into the Laplace domain. The model has been further transformed into an integral domain conforming to an algebraic ring of polynomials, to eliminate the transcendental terms which arise in the Laplace domain due to the distributed nature of the plant model. This has made possible the application of linear control theories to a set of linear partial differential equations. The models obtained track the experimental results of the plant well. A distributed-computer network has been interfaced with the plant to implement digital controllers in a hierarchical structure. A modern multivariable Wiener-Hopf controller has been applied to the plant model. The application revealed a limiting condition: the plant matrix should be positive-definite along the infinite frequency axis. A new multivariable control theory has emerged from this study which avoids this limitation. The controller has the structure of the modern Wiener-Hopf controller, but with a unique feature enabling a designer to specify the closed-loop poles in advance and to shape the sensitivity matrix as required. In this way, the method treats directly the interaction problems found in chemical processes, with good tracking and regulation performance. The ability of analytical design methods to determine once and for all whether a given set of specifications can be met is one of their chief advantages over conventional trial-and-error design procedures; one disadvantage that offsets these advantages to some degree is the relatively complicated algebra that must be employed in working out all but the simplest problems. Mathematical algorithms and computer software have therefore been developed to treat some of the mathematical operations defined over the integral domain, such as matrix-fraction description, spectral factorization, the Bezout identity, and the general manipulation of polynomial matrices. Hence, the design problems of Wiener-Hopf-type controllers and other similar algebraic design methods can be solved easily.
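
Of the operations listed, spectral factorization is the easiest to illustrate compactly. The sketch below performs the scalar version by root-splitting; it assumes no imaginary-axis roots, and it only hints at the thesis's setting, where the factorization must be carried out on polynomial matrices.

```python
import numpy as np

def spectral_factor(phi):
    """Scalar spectral factorization sketch: given coefficients of
    phi(s) = p(s) * p(-s) (highest power first, no imaginary-axis
    roots), rebuild the stable factor p(s) from the left-half-plane
    roots. The thesis's software handles the polynomial-matrix case,
    which needs matrix-fraction descriptions and the Bezout identity."""
    roots = np.roots(phi)
    stable = roots[roots.real < 0]       # keep left-half-plane roots
    n = len(stable)
    c = np.sqrt(phi[0] / (-1.0) ** n)    # leading coefficient of p(s)
    return np.real_if_close(c * np.poly(stable))

# Example: phi(s) = -s^2 + 1 = (s + 1) * (-s + 1), so p(s) = s + 1.
print(spectral_factor([-1.0, 0.0, 1.0]))   # -> [1. 1.]
```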

Relevance:

30.00%

Publisher:

Abstract:

This thesis is a theoretical study of the accuracy and usability of models that attempt to represent the environmental control system of buildings in order to improve environmental design. These models have evolved from crude representations of a building and its environment to accurate representations of the dynamic characteristics of the environmental stimuli acting on buildings. Each generation of models has had its own particular influence on built form. This thesis analyses the theory, structure and data of such models in terms of their accuracy of simulation and therefore their validity in influencing built form. The models are also analysed in terms of their compatibility with the design process and hence their ability to aid designers. The conclusions are that such models are unlikely to improve environmental performance since: (a) they can only be applied to a limited number of building types; (b) they can only be applied to a restricted number of the characteristics of a design; (c) they can only be employed after many major environmental decisions have been made; (d) the data used in the models are inadequate and unrepresentative; and (e) the models do not account for occupant interaction in environmental control. It is argued that further improvements in the accuracy of simulation of environmental control will not significantly improve environmental design. This is based on the premise that strategic environmental decisions are made at the conceptual stages of design, whereas models influence the detailed stages. It is hypothesised that if models are to improve environmental design, it must be through the analysis of building typologies, which provides a method of feedback between models and the conceptual stages of design. Field studies are presented to describe a method by which typologies can be analysed, and a theoretical framework is described which provides a basis for further research into the implications of the morphology of buildings for environmental design.

Relevance:

30.00%

Publisher:

Abstract:

Although tyrosine kinase inhibitors (TKIs) such as imatinib have transformed chronic myelogenous leukemia (CML) into a chronic condition, these therapies are not curative in the majority of cases. Most patients must continue TKI therapy indefinitely, a requirement that is expensive and compromises a patient's quality of life. While TKIs are known to reduce leukemic cells' proliferative capacity and to induce apoptosis, their effects on leukemic stem cells, the immune system, and the microenvironment are not fully understood. A more complete understanding of their global therapeutic effects would help us to identify the limitations of TKI monotherapy and to address these issues through novel combination therapies. Mathematical models are a complementary tool to experimental and clinical data that can provide valuable insights into the underlying mechanisms of TKI therapy. Previous modeling efforts have focused on CML patients who show biphasic and triphasic exponential declines in BCR-ABL ratio during therapy. However, our patient data indicate that many patients treated with TKIs show fluctuations in BCR-ABL ratio yet are able to achieve durable remissions. To investigate these fluctuations, we construct a mathematical model that integrates CML with a patient's autologous immune response to the disease. In our model, we define an immune window: an intermediate range of leukemic concentrations that leads to an effective immune response against CML. While small leukemic concentrations provide insufficient stimulus, large leukemic concentrations actively suppress a patient's immune system, thus limiting its ability to respond. Our patient data and modeling results suggest that at diagnosis, a patient's high leukemic concentration suppresses their immune system. TKI therapy drives the leukemic population into the immune window, allowing the patient's immune cells to expand and eventually mount an efficient response against the residual CML. This response drives the leukemic population below the immune window, causing the immune population to contract and allowing the leukemia to partially recover. The leukemia eventually reenters the immune window, stimulating a sequence of weaker immune responses as the two populations approach equilibrium. We hypothesize that a patient's autologous immune response to CML may explain the fluctuations in BCR-ABL ratio that are regularly seen during TKI therapy. These fluctuations may serve as a signature of a patient's individual immune response to CML. By applying our modeling framework to patient data, we are able to construct an immune profile that can then be used to propose patient-specific combination therapies aimed at further reducing a patient's leukemic burden. Our characterization of a patient's anti-leukemia immune response may be especially valuable in the study of drug resistance, treatment cessation, and combination therapy.
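
A minimal sketch of the immune-window idea, assuming a two-variable ODE caricature rather than the paper's actual equations: immune stimulation peaks at an intermediate leukemic concentration and is suppressed above it.

```python
from scipy.integrate import solve_ivp

def cml_immune(t, y, r=0.10, d_tki=0.08, a=0.002, k=50.0, s=0.08, dz=0.03):
    """Illustrative leukemia-immune ODEs with hypothetical parameters.
    L = leukemic concentration, Z = immune cells. Immune stimulation
    peaks at L = k (the 'immune window'): small L gives little
    stimulus, large L suppresses the response."""
    L, Z = y
    stimulation = s * (L / k) / (1.0 + (L / k) ** 2)   # maximal at L = k
    dL = (r - d_tki) * L - a * L * Z   # residual growth under TKI, immune kill
    dZ = (stimulation - dz) * Z        # expansion inside the window
    return [dL, dZ]

# Starting after the initial TKI response has pushed L into the window,
# L and Z cycle around their equilibrium: the immune response overshoots,
# the leukemia partially recovers, and the oscillation mimics the
# BCR-ABL fluctuations described above.
sol = solve_ivp(cml_immune, (0.0, 1000.0), [60.0, 1.0], max_step=1.0)
```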

Relevance:

20.00%

Publisher:

Abstract:

The aim of this study was to evaluate the ability of the BANA Test to detect different levels of Porphyromonas gingivalis, Treponema denticola and Tannerella forsythia, or their combinations, in subgingival samples at the initial diagnosis and after periodontal therapy. Periodontal sites with probing depths between 5-7 mm and clinical attachment levels between 5-10 mm, from 53 subjects with chronic periodontitis, were sampled in four periods: at the initial diagnosis (T0), and immediately (T1), 45 days (T2) and 60 days (T3) after scaling and root planing. The BANA Test and Checkerboard DNA-DNA hybridization identified red-complex species in the subgingival biofilm. In all experimental periods, the highest frequencies of score 2 (Checkerboard DNA-DNA hybridization) for P. gingivalis, T. denticola and T. forsythia were observed when strong enzymatic activity (BANA) was present (p < 0.01). The best agreement was observed at the initial diagnosis. The BANA Test sensitivity was 95.54% (T0), 65.18% (T1), 65.22% (T2) and 50.26% (T3); the specificity values were 12.24% (T0), 57.38% (T1), 46.27% (T2) and 53.48% (T3). The BANA Test is thus more effective for the detection of red-complex pathogens when bacterial levels are high, i.e. at the initial diagnosis of chronic periodontitis.
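
For reference, the sensitivity and specificity figures above follow the standard 2x2 definitions; the sketch below uses hypothetical counts chosen to roughly reproduce the T0 values.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Standard diagnostic definitions: sensitivity = TP / (TP + FN),
    specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen to roughly reproduce the T0 figures
# (95.54% sensitivity, 12.24% specificity); not the study's raw data.
print(sensitivity_specificity(tp=215, fn=10, tn=6, fp=43))
```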

Relevance:

20.00%

Publisher:

Abstract:

Purpose: To test the association between night work and work ability, and to verify whether the type of contractual employment has any influence over this association. Methods: Permanent workers (N = 642) and workers with precarious jobs (temporary contract or outsourced; N = 552) were interviewed and filled out questionnaires concerning work hours and the work ability index (WAI). They were classified into: never worked at night, ex-night workers, currently working up to five nights, and currently working at least six nights per 2-week span. Results: After adjusting for socio-demographic and work variables, current night work was significantly associated with inadequate WAI (vs. day work with no experience in night work) only for precarious workers (OR 2.00, CI 1.01-3.95 and OR 1.85, CI 1.09-3.13 for those working up to five nights and those working at least six nights in 2 weeks, respectively). Conclusions: Unequal opportunities at work and little experience in night work among precarious workers may explain their higher susceptibility to night work.
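
For orientation, a crude odds ratio with a Woolf-type confidence interval can be computed from a 2x2 table as below; the counts are hypothetical, and the ORs reported in the paper are adjusted estimates from regression models, not crude ones.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Woolf (log-normal) 95% CI from a 2x2
    table: a, b = exposed cases/non-cases; c, d = unexposed."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
    return or_, lo, hi

# Hypothetical counts (not the study's data).
print(odds_ratio_ci(40, 60, 25, 75))   # -> OR 2.0 with its 95% CI
```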

Relevance:

20.00%

Publisher:

Abstract:

Background: Microarray techniques have become an important tool for the investigation of genetic relationships and the assignment of different phenotypes. Since microarrays are still very expensive, most experiments are performed with small samples. This paper introduces a method to quantify dependency between data series composed of few sample points. The method is used to construct gene co-expression subnetworks of highly significant edges. Results: The results shown here are for an adapted subset of a Saccharomyces cerevisiae gene expression data set with low temporal resolution and poor statistics. The method reveals common transcription factors with a high confidence level and allows the construction of subnetworks with high biological relevance, revealing characteristic features of the processes driving the organism's adaptations to specific environmental conditions. Conclusion: Our method allows a reliable and sophisticated analysis of microarray data even under severe constraints. The use of systems biology improves biologists' ability to elucidate the mechanisms underlying cellular processes and to formulate new hypotheses.
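
The paper's dependency measure is tailored to very few sample points; as a generic stand-in, the sketch below builds a thresholded co-expression edge list from rank correlations, using hypothetical expression values and gene names.

```python
import itertools
import numpy as np

def coexpression_edges(X, genes, threshold=0.95):
    """Sketch: thresholded co-expression edges from rank correlations
    (Spearman-like, assuming no ties). X has one row of expression
    values per gene. The paper's own dependency measure, built for
    few sample points, replaces the correlation used here."""
    ranks = X.argsort(axis=1).argsort(axis=1).astype(float)
    R = np.corrcoef(ranks)               # gene-by-gene rank correlation
    return [(genes[i], genes[j], R[i, j])
            for i, j in itertools.combinations(range(len(genes)), 2)
            if abs(R[i, j]) >= threshold]

# Hypothetical expression profiles (4 samples) and gene names.
X = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.1, 5.9, 8.0],
              [5.0, 3.0, 2.0, 1.0]])
print(coexpression_edges(X, ["YFG1", "YFG2", "YFG3"]))
```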

Relevance:

20.00%

Publisher:

Abstract:

Background: High-density tiling arrays and new sequencing technologies are generating rapidly increasing volumes of transcriptome and protein-DNA interaction data. Visualization and exploration of this data is critical to understanding the regulatory logic encoded in the genome by which the cell dynamically affects its physiology and interacts with its environment. Results: The Gaggle Genome Browser is a cross-platform desktop program for interactively visualizing high-throughput data in the context of the genome. Important features include dynamic panning and zooming, keyword search and open interoperability through the Gaggle framework. Users may bookmark locations on the genome with descriptive annotations and share these bookmarks with other users. The program handles large sets of user-generated data using an in-process database and leverages the facilities of SQL and the R environment for importing and manipulating data. A key aspect of the Gaggle Genome Browser is interoperability. By connecting to the Gaggle framework, the genome browser joins a suite of interconnected bioinformatics tools for analysis and visualization with connectivity to major public repositories of sequences, interactions and pathways. To this flexible environment for exploring and combining data, the Gaggle Genome Browser adds the ability to visualize diverse types of data in relation to its coordinates on the genome. Conclusions: Genomic coordinates function as a common key by which disparate biological data types can be related to one another. In the Gaggle Genome Browser, heterogeneous data are joined by their location on the genome to create information-rich visualizations yielding insight into genome organization, transcription and its regulation and, ultimately, a better understanding of the mechanisms that enable the cell to dynamically respond to its environment.
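
The closing point, that genomic coordinates act as a common join key, can be made concrete with a minimal interval-overlap predicate (a generic illustration, not code from the Gaggle Genome Browser):

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    chrom: str
    start: int    # 0-based, half-open interval [start, end)
    end: int
    payload: dict = field(default_factory=dict)

def overlaps(a: Feature, b: Feature) -> bool:
    """Two features relate iff they share a chromosome and their
    intervals intersect; this join predicate is what lets tiling-array
    signal, protein-DNA interaction peaks and gene models be layered
    together on one coordinate axis."""
    return a.chrom == b.chrom and a.start < b.end and b.start < a.end

probe = Feature("chr1", 1200, 1260, {"signal": 7.3})
gene = Feature("chr1", 1000, 2500, {"name": "hypotheticalGeneA"})
print(overlaps(probe, gene))   # True
```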

Relevance:

20.00%

Publisher:

Abstract:

Objective: To examine the ability of the criteria proposed by the WHO to identify pneumonia among cases presenting with wheezing, and the extent to which adding fever to the criteria alters their performance. Design: Prospective classification of 390 children aged 2-59 months with lower respiratory tract disease into five diagnostic categories, including pneumonia. The WHO criteria for the identification of pneumonia, and a set of such criteria modified by adding fever, were compared with radiographically diagnosed pneumonia as the gold standard. Results: The sensitivity of the WHO criteria was 94% for children aged <24 months and 62% for those aged ≥24 months; the corresponding specificities were 20% and 16%. Adding fever to the WHO criteria improved specificity substantially (to 44% and 50%, respectively). The specificity of the WHO criteria was poor for children with wheezing (12%); adding fever improved this substantially (to 42%). The addition of fever apparently reduced sensitivity only marginally (to 92% and 57%, respectively, in the two age groups). Conclusion: The authors' results reaffirm that the current WHO criteria can detect pneumonia with high sensitivity, particularly among younger children. They present evidence that the ability of these criteria to distinguish between children with pneumonia and those with wheezing diseases might be greatly enhanced by the addition of fever.

Relevance:

20.00%

Publisher:

Abstract:

This study proposes a simplified mathematical model to describe the processes occurring in an anaerobic sequencing batch biofilm reactor (ASBBR) treating lipid-rich wastewater. The reactor, subjected to rising organic loading rates, contained biomass immobilized on cubic polyurethane foam matrices and was operated at 32 ± 2 °C using 24-h batch cycles. In the adaptation period, the reactor was fed a synthetic substrate for 46 days and was operated without agitation. When agitation was raised to 500 rpm, the organic loading rate (OLR) rose from 0.3 to 1.2 g COD·L⁻¹·day⁻¹ (COD: chemical oxygen demand). The ASBBR was then fed fat-rich dairy wastewater over an operation period of 116 days, during which four operational conditions (OCs) were tested: 1.1 ± 0.2 (OC1), 4.5 ± 0.4 (OC2), 8.0 ± 0.8 (OC3), and 12.1 ± 2.4 g COD·L⁻¹·day⁻¹ (OC4). The bicarbonate alkalinity (BA)/COD supplementation ratio was 1:1 at OC1, 1:2 at OC2, and 1:3 at OC3 and OC4. Total COD removal efficiencies were higher than 90%, with a constant production of bicarbonate alkalinity, in all OCs tested. After the process reached stability, temporal profiles of substrate consumption were obtained. A simplified first-order model was fitted to these experimental data, making possible the inference of kinetic parameters. A simplified mathematical model correlating soluble COD with volatile fatty acids (VFA) was also proposed, and through it the consumption rates of intermediate products such as propionic and acetic acids were inferred. The results showed that the microbial consortium worked properly and that high efficiencies were obtained even with high initial substrate concentrations, which led to the accumulation of intermediate metabolites and caused low specific consumption rates.
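
A minimal sketch of fitting a first-order decay to a batch substrate profile, assuming hypothetical COD values and the common first-order-with-residual form (the paper's exact equations are not given in the abstract):

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, S0, Sres, k):
    """First-order decay toward a residual concentration, a common
    simplified form for batch COD profiles."""
    return Sres + (S0 - Sres) * np.exp(-k * t)

# Hypothetical temporal profile of soluble COD over one 24-h batch
# cycle; not the study's measurements.
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 12.0, 24.0])   # h
S = np.array([4.1, 3.0, 2.3, 1.8, 1.4, 1.0, 0.6])     # g COD/L
popt, _ = curve_fit(first_order, t, S, p0=[4.0, 0.5, 0.2])
S0, Sres, k = popt   # k is the apparent first-order constant (1/h)
print(k)
```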

Relevance:

20.00%

Publisher:

Abstract:

The crystallisation behaviour of alloys in the Al-rich corner of the Al-La-Ni system is reported in this paper. Alloys were selected based on the topological instability criterion (λ criterion), calculated from the alloy composition and the metallic radii of the alloying elements and aluminum. Amorphous ribbons were produced by melt-spinning, and the crystallisation reactions were analysed by X-ray diffraction and calorimetry. The results showed that increasing λ from 0.072 to 0.16 produced the following changes in crystallisation behaviour, as predicted by the λ criterion: (a) nanocrystallisation of α-Al for the alloy composition corresponding to λ = 0.072, and (b) detection of the glass transition temperature, Tg, for alloys with compositions close to the λ ≈ 0.1 line. For compositions at both ends of the λ ≈ 0.1 line (near the binary lines), Tg could be detected only in the "intermediary" central region, and the alloy we produced in this region was considered the best glass former in the Al-rich corner. Also, except for the alloys with the highest Ni content, crystallisation proceeded by two distinct exothermic peaks, which are typical of a nanocrystallisation transformation. These behaviours are discussed in terms of compositional (λ parameter) and topological aspects to account for cluster formation in the amorphous phase.
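
A sketch of the λ computation in the Egami-Waseda form, λ = Σᵢ cᵢ·|(rᵢ/r_Al)³ − 1|, which is one common way this parameter is evaluated for Al-based alloys; the radii are standard tabulated metallic radii, the composition is hypothetical, and the paper's exact expression may differ.

```python
# Metallic radii (angstroms) of Al, La and Ni.
R_AL, R_LA, R_NI = 1.43, 1.88, 1.25

def lam(c_la, c_ni):
    """Topological instability parameter in the Egami-Waseda form:
    lambda = sum_i c_i * |(r_i / r_Al)**3 - 1| over the solutes."""
    return sum(c * abs((r / R_AL) ** 3 - 1.0)
               for c, r in ((c_la, R_LA), (c_ni, R_NI)))

# A hypothetical Al88La3Ni9 composition lands near the low-lambda,
# nanocrystallising end of the range studied.
print(lam(0.03, 0.09))   # -> ~0.068
```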

Relevance:

20.00%

Publisher:

Abstract:

This work is related to so-called non-conventional finite element formulations. Essentially, a methodology for enriching the initial approximation, typical of meshless methods and based on the clouds concept, is introduced into the hybrid-Trefftz formulation for plane elasticity. The formulation presented allows for the approximation and direct enrichment of two independent fields: stresses in the domains and displacements on the boundaries of the elements. With a cloud defined as the set of elements and interior boundaries sharing a common node, the cloud notion is employed to select the enrichment support for the approximation fields. The numerical analysis performed reveals an excellent performance of the resulting formulation, characterized by good approximation ability and reduced computational effort.
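
A generic sketch of partition-of-unity enrichment in the spirit of cloud methods, assuming a nodal shape function multiplied by scaled monomials; the paper enriches hybrid-Trefftz stress and displacement fields rather than a nodal basis, so this only conveys the flavor of the idea.

```python
import numpy as np

def cloud_enriched_basis(phi_j, x, x_j, h_j, degree=2):
    """Partition-of-unity style enrichment: the base shape function
    value phi_j(x), attached to node x_j, is multiplied by scaled
    monomials on the node's cloud of radius h_j (generic sketch)."""
    return np.array([phi_j * ((x - x_j) / h_j) ** p
                     for p in range(degree + 1)])

# A hat-function value of 0.5 at x = 0.25, node at 0.5, cloud radius 0.5:
print(cloud_enriched_basis(0.5, 0.25, 0.5, 0.5))   # [0.5, -0.25, 0.125]
```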

Relevance:

20.00%

Publisher:

Abstract:

A quantitative correlation between the glass-forming ability and the electronic parameters of metallic alloys is presented. It is found that the critical cooling rate for glass formation (Rc) correlates well with the average work-function difference (Δφ) and the average electron-density difference (Δn_ws^(1/3)) among the constituent elements of the investigated alloys. A correlation coefficient (R²) of 0.77 was found for 68 alloys in 30 metallic systems, which is better than the previously proposed correlation between glass-forming ability and the average Pauling electronegativity difference.
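
The reported correlation can be reproduced procedurally with an ordinary least-squares fit of log10(Rc) against the two electronic parameters, as sketched below with made-up sample values (the paper fits 68 real alloys, so the R² obtained here only demonstrates the computation):

```python
import numpy as np

# Made-up sample values, not the paper's data.
dphi  = np.array([0.30, 0.45, 0.62, 0.71, 0.85])   # avg work-function difference
dnws  = np.array([0.18, 0.25, 0.33, 0.40, 0.44])   # avg electron-density difference
logRc = np.array([6.1, 4.8, 3.4, 2.2, 1.5])        # log10 of critical cooling rate

X = np.column_stack([np.ones_like(dphi), dphi, dnws])
beta, *_ = np.linalg.lstsq(X, logRc, rcond=None)   # OLS coefficients
pred = X @ beta
r2 = 1.0 - np.sum((logRc - pred) ** 2) / np.sum((logRc - logRc.mean()) ** 2)
print(beta, r2)
```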