866 results for Cognitive Linguistics. Situation Models. Mental Simulation. Frames and Schemas
Abstract:
Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the mean model agrees quite well with the observations at many sites on the annual mean, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe. 
Overall, the multi-model-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.
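Aerosol microphysics models of the kind evaluated above commonly represent the particle size distribution as a superposition of lognormal modes (Aitken, accumulation, etc.). As a minimal sketch of that representation, the snippet below builds two illustrative lognormal modes (the mode parameters are assumptions for illustration, not values from any of the 12 models) and checks that integrating dN/dlnD recovers the total number concentration:

```python
import numpy as np

def lognormal_mode(D, N, Dg, sigma_g):
    """dN/dlnD for a lognormal aerosol mode.
    N: total number concentration (cm^-3), Dg: geometric mean
    diameter (nm), sigma_g: geometric standard deviation."""
    return (N / (np.sqrt(2.0 * np.pi) * np.log(sigma_g))
            * np.exp(-np.log(D / Dg) ** 2 / (2.0 * np.log(sigma_g) ** 2)))

D = np.logspace(0, 3, 200)  # diameters from 1 to 1000 nm
# Illustrative (assumed) parameters for an Aitken and an accumulation mode:
aitken = lognormal_mode(D, N=1500.0, Dg=40.0, sigma_g=1.6)
accum = lognormal_mode(D, N=800.0, Dg=150.0, sigma_g=1.5)
total = aitken + accum
# Integrating dN/dlnD over lnD recovers the total number concentration
N_total = np.trapz(total, np.log(D))
```

Biases such as the wintertime underestimate of the accumulation mode then show up directly as errors in the fitted N, Dg, or sigma_g of the corresponding mode.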
Abstract:
Continuous casting is a process that produces steel slabs continuously: steel is poured at the top of the caster and a solid strand emerges from the mould below. Molten steel is transferred from the AOD converter to the caster in a ladle, which is designed to be strong and insulated. Complete insulation is never achieved, however: some heat is lost to the refractories by convection and conduction, and losses by radiation also occur. Since it is important to know the temperature of the melt during the process, an online model was previously developed to simulate the steel and ladle wall temperatures over the ladle cycle. The model was built as an ODE-based model using a grey-box modeling technique; its performance was acceptable, but it needed to be presented in a user-friendly way. The aim of this thesis work was to design a GUI that presents the steel and ladle wall temperatures calculated by the model and allows the user to make adjustments to the model. This thesis also discusses a sensitivity analysis of the parameters involved and their effects on the different temperature estimates.
Abstract:
The work in this thesis aims to demonstrate the importance of models that can predict and represent the mobility of our society. To address the proposed challenges, two models were examined. The first is a macro-simulation intended to find a solution for the service frequency of the bus company Horários do Funchal, responsible for transport in the city of Funchal and some surrounding areas; based on a simplified model of the city, it was possible to increase the frequency of journeys while achieving an overall reduction in costs. The second is a micro-simulation of Avenida do Mar, where a new roundabout (Praça da Autonomia) connecting to this avenue is currently being built; the study assessed the impact on local traffic and the implementation of new traffic lights for this purpose. Four scenarios were created, varying the number of lanes on the roundabout and the insertion of a bus lane. The results showed that a roundabout with three running lanes is the best option, because waiting queues are minimal and, at the environmental level, this configuration emits fewer pollutants. This thesis thus presents two possible methods for urban planning. Transport modelling is an area under constant development, and the overall goal is to encourage ever wider use of these models; it is therefore important that more people devote themselves to studying new ways of addressing current problems, so that the models become more accurate and their credibility increases.
Abstract:
The objective of this study was to evaluate the use of probit and logit link functions for the genetic evaluation of early pregnancy using simulated data. The following simulation/analysis structures were constructed: logit/logit, logit/probit, probit/logit, and probit/probit. The percentages of precocious females were 5, 10, 15, 20, 25 and 30%, adjusted by changing the mean of the latent variable. The parametric heritability (h²) was 0.40. Simulation and genetic evaluation were implemented in the R software. Heritability estimates (ĥ²) were compared with h² using the mean squared error. Pearson correlations between predicted and true breeding values, and the percentage of coincidence between the true and predicted rankings for the 10% of bulls with the highest breeding values (TOP10), were calculated. The mean ĥ² values were under- and overestimated for all percentages of precocious females when the logit/probit and probit/logit models were used. In addition, the mean squared errors of these models were high compared with those obtained with the probit/probit and logit/logit models. Considering ĥ², probit/probit and logit/logit were also superior to logit/probit and probit/logit, providing values close to the parametric heritability. Logit/probit and probit/logit presented low Pearson correlations, whereas the correlations obtained with probit/probit and logit/logit ranged from moderate to high. With respect to the TOP10 bulls, logit/probit and probit/logit presented much lower coincidence percentages than probit/probit and logit/logit. The genetic parameter estimates and predicted breeding values obtained with the logit/logit and probit/probit models were similar; in contrast, the results obtained with probit/logit and logit/probit were not satisfactory. There is thus a need to compare the estimation and prediction ability of the logit and probit link functions.
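In the threshold-model setting described above, the percentage of precocious females is set by shifting the mean of the latent variable, and the required shift differs between the probit and logit links. A minimal sketch of that relationship (pure Python with hypothetical helper names; the actual study used R):

```python
import math

def probit_p(mu):
    """P(early pregnancy) under a probit link: latent ~ Normal(mu, 1),
    the trait is expressed when the latent variable exceeds 0."""
    return 0.5 * math.erfc(-mu / math.sqrt(2.0))

def logit_p(mu):
    """P(early pregnancy) under a logit link (logistic latent variable)."""
    return 1.0 / (1.0 + math.exp(-mu))

def mu_for_incidence(p_target, link):
    """Bisect for the latent mean that gives the target incidence."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if link(mid) < p_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Latent means producing 10% precocious females under each link:
mu_probit = mu_for_incidence(0.10, probit_p)  # ~ -1.2816
mu_logit = mu_for_incidence(0.10, logit_p)    # ~ -2.1972
```

The mismatch the study reports (e.g. probit/logit) corresponds to generating data with one of these latent scales and analyzing it with the other.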
Abstract:
Our research aims to describe and analyze the main processes related to the activation of the conceptual domains underlying the comprehension of the cartoon discourse genre by third-year high-school students at the Professor Antonio Basílio Filho School. Theoretically, we are grounded in the assumptions of Cognitive Linguistics, which analyzes our cognitive apparatus in correlation with our sociocultural and bodily experiences. We intend to examine the process of meaning construction and the integration of the various cognitive domains that are activated during the reading activity. We therefore take the concept of cognitive domains as equivalent to the structures that are stored in our memory through our sociocultural and bodily experiences and are stabilized, respectively, as frames and schemas. The activation of these conceptual domains, as evidenced by our data, supports the assumption that prior knowledge arising from our inclusion in specific sociocultural contexts, together with the functioning of our sensory-motor system, is essential during the activity of meaning construction. With this research, we also intend to present a proposal that compares the responses students are expected to produce from the activation of frames and schemas with our predictions.
Abstract:
Depression is a highly prevalent illness among the institutionalized elderly and assumes peculiar characteristics, such as the risk of progressing to dementia. The aim of this study was to assess the cognitive functions of institutionalized elderly people with a clinical diagnosis of depression and to compare the severity of depressive symptoms with cognitive performance. From 120 residents at a nursing home in Rio Claro, Brazil, we studied 23 individuals (mean age: 74.3 years; mean schooling: 4.0 years) with a diagnosis of depression. First, a clinical diagnosis of depression was made and its symptoms were measured using the Geriatric Depression Scale. Each patient then underwent a neuropsychological assessment based on the following tests: Mini-Mental State Examination, Verbal Fluency, Visual Perception, Immediate Memory, Recent Memory, Recognition, and the Clock Drawing Test. The patients were divided into two groups: those with less severe depressive symptoms (Group 1: N=9) and those with more severe symptoms (Group 2: N=14). The difference in symptom severity between the two groups was significant (p=0.0001). Patients with more severe symptoms showed slightly inferior cognitive performance in most of the tests when compared to those with less severe symptoms (p>0.05). In Verbal Fluency, patients with more severe depressive symptoms performed significantly worse than those with less severe symptoms (p=0.0082). Verbal Fluency proved to be the most sensitive test for detecting early cognitive alterations in institutionalized elderly people with depression, and appears to be a useful resource for monitoring the cognitive functions of patients at risk of dementia. © Copyright Moreira Jr. Editora.
Abstract:
Background: a program for phonological remediation in developmental dyslexia. Aim: to verify the efficacy of a phonological remediation program in students with developmental dyslexia. The specific goals of this study were to compare the linguistic-cognitive performance of students with developmental dyslexia with that of students considered good readers; to compare the pre- and post-testing results of students with dyslexia who were and were not submitted to the program; and to compare the results obtained with the phonological remediation program in students with developmental dyslexia to those obtained in good readers. Method: the participants were 24 students, divided as follows: Group I (GI) was divided into two subgroups - GIe, with 6 students with developmental dyslexia who were submitted to the program, and GIc, with 6 students with developmental dyslexia who were not; Group II (GII) was likewise divided into two subgroups - GIIe, with 6 good readers who were submitted to the program, and GIIc, with 6 good readers who were not. The phonological remediation program (Gonzalez & Rosquete, 2002) was developed in three stages: pre-testing, training and post-testing. Results: GI presented lower performance in phonological skills, reading and writing when compared to GII in pre-testing. However, GIe presented performance similar to that of GII in post-testing, indicating the effectiveness of the phonological remediation program in students with developmental dyslexia. Conclusion: this study demonstrated the effectiveness of the phonological remediation program in students with developmental dyslexia.
Abstract:
An extension of some standard likelihood-based procedures to heteroscedastic nonlinear regression models under scale mixtures of skew-normal (SMSN) distributions is developed. This novel class of models provides a useful generalization of the heteroscedastic symmetrical nonlinear regression models (Cysneiros et al., 2010), since the random-term distributions cover symmetric as well as asymmetric and heavy-tailed distributions, such as the skew-t, skew-slash and skew-contaminated normal, among others. A simple EM-type algorithm for iteratively computing maximum likelihood estimates of the parameters is presented, and the observed information matrix is derived analytically. To examine the performance of the proposed methods, simulation studies are presented that show the robustness of this flexible class against outlying and influential observations, and that the maximum likelihood estimates based on the EM-type algorithm have good asymptotic properties. Furthermore, local influence measures and one-step approximations of the estimates in the case-deletion model are obtained. Finally, the methodology is illustrated on a data set previously analyzed under the homoscedastic skew-t nonlinear regression model. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Background: The paucity of studies regarding cognitive function in patients with chronic pain, and growing evidence regarding the cognitive effects of pain and opioids, prompted us to assess cognition via neuropsychological measurement in patients with chronic non-cancer pain treated with opioids. Methods: In this cross-sectional study, 49 patients were assessed with the Continuous Reaction Time, Finger Tapping, Digit Span, Trail Making Test-B and Mini-Mental State Examination tests. Linear regressions were applied. Results: Patients scored poorly on the Trail Making Test-B (mean = 107.6 s, SD = 61.0, cut-off = 91 s) and adequately on all other tests. Several associations among independent variables and cognitive tests were observed. In the multiple regression analyses, the variables associated with statistically significant poor cognitive performance were female sex, higher age, lower annual income, lower schooling, anxiety, depression, tiredness, lower opioid dose, and more than 5 h of sleep the night before assessment (P < 0.05). Conclusions: Patients with chronic pain may have cognitive dysfunction related to some reversible factors, which can be optimized by therapeutic interventions.
Abstract:
Objectives: To assess the relationship between the CHS frailty criteria (Fried et al., 2001) and cognitive performance. Design: Cross-sectional and population-based. Setting: Ermelino Matarazzo, a poor subdistrict of the city of Sao Paulo, Brazil. Participants: 384 community-dwelling older adults aged 65 and older. Measurements: Assessment of the CHS frailty criteria, the Brief Cognitive Screening Battery (memorization of 10 black-and-white pictures, verbal fluency for the animal category, and the Clock Drawing Test) and the Mini-Mental State Examination (MMSE). Results: Frail older adults performed significantly worse than non-frail and pre-frail elderly on most cognitive variables. Grip strength and age were associated with MMSE performance, age was associated with delayed memory recall, gait speed was associated with verbal fluency and CDT performance, and education was associated with CDT performance. Conclusion: Being frail may be associated with cognitive decline; thus, gerontological assessments and interventions should consider that these forms of vulnerability may occur simultaneously.
Abstract:
Background: Frailty in older adults is a multifactorial syndrome defined by low metabolic reserve, less resistance to stressors, and difficulty in maintaining organic homeostasis due to the cumulative decline of multiple physiological systems. The relationship between frailty and cognition remains unclear, and studies of Mini-Mental State Examination (MMSE) performance and frailty are scarce. The objective was to examine the association between frailty and cognitive functioning as assessed by the MMSE and its subdomains. Methods: A cross-sectional population-based study (FIBRA) was carried out in Ermelino Matarazzo, a poor subdistrict of the city of Sao Paulo, Brazil. Participants were 384 community-dwelling older adults, 65 years and older, who completed the MMSE and a protocol assessing the frailty criteria described in the Cardiovascular Health Study (CHS). Results: Frail older adults had significantly worse performance on the MMSE (p < 0.001 for the total score). Linear regression analyses showed that the MMSE total score was influenced by age (p < 0.001), education (p < 0.001), family income (p < 0.001), and frailty status (p = 0.036). Being frail was most significantly associated with worse scores in Time Orientation (p = 0.004) and Immediate Memory (p < 0.001). Conclusions: Our data suggest that being frail is associated with worse cognitive performance, as assessed by the MMSE. It is recommended that the assessment of frail older adults include an investigation of their cognitive status.
Computer simulation of ordering and dynamics in liquid crystals in the bulk and close to the surface
Abstract:
The aim of this PhD thesis is to investigate the orientational and dynamical properties of liquid crystalline systems at the molecular level, using atomistic computer simulations, to reach a better understanding of material behavior from a microscopic point of view. In perspective, this should make it possible to clarify the relation between micro- and macroscopic properties, with the objective of predicting or confirming experimental results on these systems. In this context, four lines of work were developed in the thesis. The first concerns the orientational order and alignment mechanism of small rigid solutes dissolved in a nematic phase formed by the 4-pentyl-4'-cyanobiphenyl (5CB) liquid crystal. The orientational distributions of the solutes were obtained with Molecular Dynamics (MD) simulations and compared with experimental data reported in the literature. We also verified the agreement between order parameters and dipolar coupling values measured in NMR experiments. The MD-determined effective orientational potentials were compared with the predictions of the Maier-Saupe and surface tensor models. The second line concerns the development of a parametrization able to reproduce the phase transition properties of a prototype of the oligothiophene semiconductor family, sexithiophene (T6). T6 forms two well-studied crystalline polymorphs and possesses liquid crystalline phases that are still not well characterized. From simulations we detected a phase transition from crystal to liquid crystal at about 580 K, in agreement with available experiments, and in particular we found two LC phases, smectic and nematic. The crystal-smectic transition is associated with a significant density variation and with strong conformational changes of T6: the molecules in the liquid crystal phase easily assume a bent shape, deviating from the planar structure typical of the crystal.
The third line explores a new approach for calculating the viscosity of a nematic through a virtual experiment resembling the classical falling-sphere experiment. The falling sphere is replaced by a hydrogenated silicon nanoparticle of spherical shape suspended in 5CB, and gravity is replaced by a constant force applied to the nanoparticle along a selected direction. Once the nanoparticle reaches a constant velocity, the viscosity of the medium can be evaluated using Stokes' law. With this method we successfully reproduced the experimental viscosities and viscosity anisotropy of the solvent 5CB. The last line deals with the study of the order induced on nematic molecules by a hydrogenated silicon surface. Gaining predictive power for the anchoring behavior of liquid crystals at surfaces would be a very desirable capability, as many device-related properties depend on the molecular organization close to surfaces. Here we studied, by means of atomistic MD simulations, the flat interface between a hydrogenated (001) silicon surface and a sample of 5CB molecules. We found planar anchoring of the first layers of 5CB, where surface interactions dominate over the mesogen intermolecular interactions. We also analyzed the 5CB-vacuum interface, finding a homeotropic orientation of the nematic at this interface.
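The viscosity extraction in the virtual falling-sphere experiment reduces to Stokes' drag law, eta = F / (6 pi r v), once the nanoparticle reaches terminal velocity. A minimal sketch with illustrative numbers (assumed for the example, not the thesis' actual simulation values):

```python
import math

def stokes_viscosity(force, radius, velocity):
    """Viscosity (Pa*s) from Stokes' drag F = 6*pi*eta*r*v at
    terminal velocity: eta = F / (6*pi*r*v)."""
    return force / (6.0 * math.pi * radius * velocity)

# Illustrative (assumed) values: a nanoparticle of 2 nm radius pulled
# at a terminal velocity of 1 m/s by a constant 1.9 nN force yields a
# viscosity of roughly 0.05 Pa*s, a plausible order of magnitude for 5CB.
eta = stokes_viscosity(force=1.9e-9, radius=2e-9, velocity=1.0)
```

Measuring the terminal velocity for forces along and perpendicular to the nematic director then gives the viscosity anisotropy mentioned above.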
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergence of the peculiar structures of the individual phenotype. Being able to reproduce the system's dynamics at the different levels of such a hierarchy can be very useful for studying this complex phenomenon of self-organisation. The idea is to model the phenomenon as a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. On these premises, the thesis reviews the different approaches already developed for modelling developmental biology problems, as well as the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment/multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. It is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning, the simulators are supplied with a module for parameter optimisation.
The task is defined as an optimisation problem over the parameter space, in which the objective function to be minimised is the distance between the output of the simulator and a target output. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The goal of the model is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data with spatial and temporal resolution, acquired from freely available online sources.
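The simulation engine mentioned above is built on Gillespie's direct method. Its core loop can be sketched as follows (a generic toy decay reaction is used for illustration, not the thesis' gene-expression model):

```python
import random

def gillespie_direct(rates, update, state, t_end, rng=random.Random(1)):
    """Gillespie's direct method: draw the waiting time to the next
    reaction from an exponential with the total propensity, then pick
    which reaction fires in proportion to its propensity."""
    t = 0.0
    while t < t_end:
        props = [r(state) for r in rates]
        total = sum(props)
        if total == 0.0:          # no reaction can fire
            break
        t += rng.expovariate(total)
        pick = rng.random() * total
        for i, p in enumerate(props):
            pick -= p
            if pick <= 0.0:
                update[i](state)  # apply the chosen reaction's effect
                break
    return state

# Toy example: irreversible decay A -> 0 with rate constant k
k = 0.1
state = {"A": 100}
rates = [lambda s: k * s["A"]]
update = [lambda s: s.__setitem__("A", s["A"] - 1)]
final = gillespie_direct(rates, update, state, t_end=50.0)
```

The many-species/many-channels optimisation used by MS-BioNET speeds up exactly the two steps shown here, the propensity recomputation and the channel selection.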
Abstract:
This thesis studies molecular dynamics simulations at two levels of resolution: the detailed level of atomistic simulations, where the motion of explicit atoms in a many-particle system is considered, and the coarse-grained level, where the motion of superatoms composed of up to 10 atoms is modeled. While atomistic models are capable of describing material-specific effects on small scales, the time and length scales they can cover are limited by their computational cost. Polymer systems are typically characterized by effects on a broad range of length and time scales, so it is often impossible to atomistically simulate the processes that determine macroscopic properties in polymer systems. Coarse-grained (CG) simulations extend the range of accessible time and length scales by three to four orders of magnitude; however, no standardized coarse-graining procedure has been established yet. Following the ideas of structure-based coarse-graining, a coarse-grained model for polystyrene is presented. Structure-based methods parameterize CG models to reproduce static properties of atomistic melts, such as radial distribution functions between superatoms or other probability distributions for coarse-grained degrees of freedom. Two enhancements of the coarse-graining methodology are suggested. First, correlations between local degrees of freedom are implicitly taken into account by additional potentials acting between neighboring superatoms in the polymer chain. This improves the reproduction of local chain conformations and allows the study of different tacticities of polystyrene. It also gives better control of the chain stiffness, which agrees perfectly with the atomistic model, and leads to a reproduction of experimental results for overall chain dimensions, such as the characteristic ratio, for all tacticities. The second new aspect is the computationally cheap development of nonbonded CG potentials based on the sampling of pairs of oligomers in vacuum.
Static properties of polymer melts are obtained as predictions of the CG model, in contrast to other structure-based CG models, which are iteratively refined to reproduce reference melt structures. The dynamics of simulations at the two levels of resolution are compared. The time scales of dynamical processes in atomistic and coarse-grained simulations can be connected by a time scaling factor, which depends on several system properties, such as molecular weight, density, temperature, and the other components in mixtures. In this thesis, the influence of molecular weight in systems of oligomers and the situation in two-component mixtures are studied. For a system of small additives in a melt of long polymer chains, the temperature dependence of the additive diffusion is predicted and compared to experiments.
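A common way to obtain such a time scaling factor is to match diffusion coefficients between the two resolutions via the Einstein relation, MSD(t) = 2*d*D*t. A minimal sketch with synthetic, illustrative mean-square-displacement data (the linear MSDs and the resulting factor of 4 are assumptions for the example, not thesis results):

```python
import numpy as np

def diffusion_coefficient(t, msd, dim=3):
    """Einstein relation: D from the slope of a linear fit of
    MSD(t) = 2*dim*D*t in the diffusive regime."""
    slope = np.polyfit(t, msd, 1)[0]
    return slope / (2.0 * dim)

t = np.linspace(0.0, 10.0, 50)
msd_atomistic = 0.6 * t   # illustrative diffusive-regime MSDs
msd_cg = 2.4 * t          # CG dynamics are artificially faster
# The time scaling factor maps CG time back onto atomistic (real) time
scaling = diffusion_coefficient(t, msd_cg) / diffusion_coefficient(t, msd_atomistic)
```

Because the factor depends on molecular weight, temperature, and mixture composition, it has to be re-determined for each system, which is precisely what the comparisons in this thesis address.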
Abstract:
Globalization has increased the pressure on organizations and companies to operate in the most efficient and economical way. This tendency leads companies to concentrate more and more on their core businesses and to outsource less profitable departments and services in order to reduce costs. In contrast to earlier times, companies are now highly specialized and have a low real net output ratio. To provide consumers with the right products, these companies have to collaborate with other suppliers and form large supply chains. A drawback of large supply chains is high stocks and stockholding costs. This has led to the rapid spread of just-in-time logistics concepts aimed at minimizing stock while maintaining high product availability. These competing goals demand high availability of the production systems, so that an incoming order can be processed immediately. Besides design aspects and the quality of the production system, maintenance has a strong impact on production system availability. In recent decades, there have been many attempts to create maintenance models for availability optimization. Most of them concentrated on the availability aspect only, without incorporating further aspects such as logistics and the profitability of the overall system. However, the production system operator's main intention is to optimize the profitability, not the availability, of the production system. Thus, classic models, limited to representing and optimizing maintenance strategies in the light of availability, fail. A novel approach, incorporating all processes with a financial impact in and around a production system, is needed. The proposed model is subdivided into three parts: a maintenance module, a production module and a connection module. This subdivision provides easy maintainability and simple extensibility.
Within these modules, all aspects of the production process are modeled. The main part of the work lies in the extended maintenance and failure module, which represents different maintenance strategies and also incorporates the effects of over-maintaining and failed maintenance (maintenance-induced failures). Order release and the seizing of the production system are modeled in the production part. Due to limitations in computational power, it was not possible to run the simulation and the optimization with the fully developed production model. Thus, the production model was reduced to a black box with a lower level of detail.