906 results for PTSD, bombing, cognitive models, community, survey
Abstract:
BACKGROUND Preparing for potentially threatening future events is essential for survival. Anticipating the future as unpleasant is also a key cognitive feature of depression. We hypothesized that 'pessimism'-related emotion processing would characterize brain activity in major depression. METHOD During functional magnetic resonance imaging, depressed patients and a healthy control group were cued to expect and then perceive pictures of known emotional valence (pleasant, unpleasant and neutral) and stimuli of unknown valence that could have been either pleasant or unpleasant. Brain activation associated with the 'unknown' expectation was compared with the 'known' expectation conditions. RESULTS While anticipating pictures of unknown valence, activation patterns in depressed patients within the medial and dorsolateral prefrontal areas, inferior frontal gyrus, insula and medial thalamus resembled the activations associated with expecting unpleasant pictures, but not those associated with expecting pleasant pictures. Activity within most of these areas correlated with depression scores. Differences between healthy and depressed participants were found particularly for medial and dorsolateral prefrontal and insular activations. CONCLUSIONS In depression, brain activation while expecting events of unknown emotional valence was comparable to activation while expecting certainly negative, but not positive, events. This neurobiological finding is consistent with cognitive models proposing that depressed patients adopt a 'pessimistic' attitude towards events of unknown emotional meaning, and it highlights the contribution to major depression of brain areas associated with cognitive and executive control and with processing of the internal state.
Abstract:
The investigator conducted an action-oriented investigation of pregnancy and birth among the women of Mesa los Hornos, an urban squatter slum in Mexico City. Three aims guided the project: (1) to obtain information for improving prenatal and maternity service utilization; (2) to examine the utility of rapid ethnographic and epidemiologic assessment methodologies; (3) to cultivate community involvement in health development. Viewing service utilization as a culturally bound decision, the study included a qualitative phase to explore women's cognition of pregnancy and birth, their perceived needs during pregnancy, and their criteria of service acceptability. A probability-based community survey delineated parameters of service utilization and pregnancy health events, and probed reasons for decisions to use medical services, lay midwives, or other sources of prenatal and labor and delivery assistance. A qualitative survey of service providers at relevant clinics, hospitals, and practices contributed information on service availability and access, and on coordination among the private, social security, and public assistance health service sectors. The ethnographic approach to exploring the rationale for use or non-use of services provided a necessary complement to conventional barrier-based assessment, to inform the planning of culturally appropriate interventions. Information collection and interpretation were conducted under the aegis of an advisory committee of community residents and service agency representatives; the residents' committee formulated recommendations for action based on the findings and forwarded the mandate to governmental social and urban development offices. Recommendations were designed to inform and develop community participation in health care decision-making. Rapid research methods are powerful tools for achieving community-based empowerment toward the investigation and resolution of local health problems.
But while ethnography works well in synergy with quantitative assessment approaches to strengthen the validity and richness of short-term field work, the author strongly urges caution in the application of Rapid Ethnographic Assessments. An ethnographic sensibility is essential to the research enterprise for the development of an active and cooperative community base, the design and use of quantitative instruments, the appropriate use of qualitative techniques, and the interpretation of culturally oriented information. However, prescribed and standardized Rapid Ethnographic Assessment techniques are counter-productive if used as research short-cuts before locale- and subject-specific cultural understanding is achieved.
Abstract:
In the early 1990s, ontology development was more art than engineering: ontology developers had no clear guidelines on how to build ontologies, only some design criteria to follow. Work on principles, methods and methodologies, together with supporting technologies and languages, turned ontology development into an engineering discipline, the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science, (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bio-informatics and education, and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, covering the most prominent and widely used methodologies, languages, and tools for building ontologies. In addition, we comment briefly on how all these elements can be used in the Linked Data initiative.
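Conceptually, an ontology reduces to a set of typed statements over which simple reasoning services operate. A minimal sketch in Python (toy triples and class names invented for illustration, not from any published ontology or from the tools the paper surveys):

```python
# Toy ontology as a set of (subject, predicate, object) triples; class
# and property names are invented, not from any published ontology.
triples = {
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
    ("hasOwner", "domain", "Dog"),
}

def superclasses(cls, triples):
    """Transitive closure of subClassOf: a tiny reasoning service."""
    out, frontier = set(), {cls}
    while frontier:
        step = {o for s, p, o in triples
                if p == "subClassOf" and s in frontier}
        frontier = step - out
        out |= step
    return out
```

Real Ontology Engineering tool suites provide the same kind of service (subsumption, classification) over OWL or RDFS models rather than raw tuples.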
Abstract:
A paradigm shift is taking place in geodesy in the conception of digital terrain models: instead of designing a model with the fewest possible points, models are now built from hundreds of thousands or millions of points. This change is a consequence of the introduction of new technologies such as laser scanning, radar interferometry and image processing. Their rapid acceptance is due mainly to the high speed of data capture, to their accessibility as reflectorless techniques, and to the high level of detail of the resulting models. Classic survey methods are based on discrete measurements of points which, taken together, form a model; the precision of the model derives from the precision with which each single point is measured. Terrestrial laser scanning (TLS) takes a different approach to generating a model of the observed object. The point clouds produced by a TLS scan are treated as a whole by means of area-based analysis, so the final model is not an aggregation of points but the surface that best fits the point cloud. When the precision of capturing single points is compared between tachymetric methods and TLS equipment, the inferiority of the latter is clear; it is in the treatment of point clouds with area-based analysis methods that acceptable precisions have been obtained, making it possible to incorporate this technology fully into studies of deformations and movements of structures. Notable TLS applications include heritage recording, recording of construction stages of industrial plants and structures, accident reporting, and monitoring of ground movements and structural deformations.
In dam monitoring, compared with observing discrete points inside the dam, on its crest or on its face, a continuous model of the downstream face opens the possibility of introducing surface deformation analysis methods and of building behaviour models that improve the understanding and forecasting of dam movements. Nevertheless, TLS should be considered a method complementary to the existing ones: whereas pendulums and the more recent technique based on the differential global positioning system (DGPS) provide continuous information on the movements of specific points of the dam, TLS makes it possible to follow the seasonal evolution of the whole face and to detect potential problem zones across it. This work analyses the characteristics of TLS technology and the parameters that determine the final precision of the scans. It establishes the need to use equipment based on direct time-of-flight measurement, also called pulsed scanners, for distances between 100 m and 300 m. The application of TLS to the modelling of structures and vertical faces is studied, and the factors that influence the final precision are analysed, including point-cloud registration, target types, and the combined effect of scanning angle and distance.
The use of map-type plots relating precision or accuracy to scanning distance and angle of incidence is proposed and validated for the design of field work; their application is illustrated in the preparation of a scanning campaign for monitoring the movements of a dam, and recommendations are given for applying TLS to large structures. Such a plot was produced for a specific mid-range TLS instrument from two field tests under realistic working conditions, similar to dam monitoring and other civil engineering works, scanning across the instrument's full range of distances and angles; combining isolines with points sized and grey-scaled in proportion to the precision values they represent, it allows precisions under different field conditions to be compared with specifications. Two methods for assessing the precision of face modelling and movement detection are analysed: the standard "plane of best fit" method and the proposed "simulated deformation" method, which showed improved performance. Finally, the seasonal movements of an arch-gravity dam recorded by its direct pendulums are compared with those obtained from the analysis of point clouds from several TLS scanning campaigns of the same dam. The results show differences of millimetres, the best of the order of one millimetre. The methodology is explained, and considerations are given regarding point-cloud density and the size of the triangular meshes.
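The "plane of best fit" assessment can be sketched numerically: fit a plane to the cloud by orthogonal regression and take the spread of the orthogonal residuals as the modelling precision. A minimal sketch with simulated points (invented geometry and noise level, not the thesis's data or instrument):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scan of a near-planar dam face: a gently tilted plane plus
# 2 mm of range noise (units in metres; all values are illustrative).
n = 5000
x = rng.uniform(0, 30, n)
y = rng.uniform(0, 20, n)
z = 0.02 * x - 0.01 * y + 5.0 + rng.normal(0, 0.002, n)
cloud = np.column_stack([x, y, z])

def best_fit_plane(points):
    """Orthogonal-regression plane via SVD of the centred cloud."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                          # direction of least variance
    residuals = (points - centroid) @ normal  # signed orthogonal distances
    return normal, residuals

normal, residuals = best_fit_plane(cloud)
precision = residuals.std()                  # close to the 2 mm noise level
```

Comparing such residual fields between two scanning epochs is the basis for detecting face movements at the millimetre level discussed above.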
Abstract:
It is well known that higher parental socioeconomic status (SES) predicts better child reading outcomes, but little work has been done to unpack this finding. The main overall question addressed by this project was whether cognitive models of the two main reading outcomes, single word reading (SWR) and reading comprehension (RC), performed similarly across levels of parental SES. The current study predicted a differential relation between parental SES and both predictors and outcomes because of the known large relation between parental SES and child oral language development. Three questions examined the mediating effects of cognitive predictors on the relation between parental SES and reading outcomes, the moderating effects of SES on the developmental trajectories of reading outcomes, and the strength of the relationship between SES and the two reading outcomes. Participants were part of two large and comprehensive datasets: the cross-sectional Colorado Learning Disability Research Center (CLDRC; n=1554) sample and the International Longitudinal Twin Study (ILTS; n=463 twin pairs) sample. In terms of cognitive predictors, the relation between SES and SWR was disproportionately mediated by two language skills, vocabulary (VOC) and phonological awareness (PA). For the RC models, neither SWR nor oral listening comprehension (OLC) disproportionately mediated the relation between RC and SES; however, full mediation was not exhibited. With regard to the trajectory of reading outcomes, SES moderated the starting values of SWR and RC and the slopes of SWR development. When performance on the control measures of early reading skills (e.g., print knowledge, vocabulary, and decoding skills) was included in the models, the moderating effects of SES were completely accounted for by these measures. In terms of outcomes, SES had a stronger relation to RC than to SWR, especially at later ages.
These findings have implications for interventions aimed at improving reading outcomes in children from lower SES families.
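The mediation analyses summarized above follow the standard product-of-coefficients logic, which can be sketched as follows (simulated data with invented effect sizes, not the CLDRC or ILTS samples):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data: parental SES influences vocabulary (VOC), which in turn
# influences single word reading (SWR); all effect sizes are invented.
n = 2000
ses = rng.normal(0, 1, n)
voc = 0.5 * ses + rng.normal(0, 1, n)
swr = 0.4 * voc + 0.1 * ses + rng.normal(0, 1, n)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

a = slope(ses, voc)                      # path SES -> mediator (VOC)
# Regress SWR on both VOC and SES to get the b path and the direct effect.
X = np.column_stack([np.ones(n), voc, ses])
coef, *_ = np.linalg.lstsq(X, swr, rcond=None)
b, direct = coef[1], coef[2]

indirect = a * b                         # mediated (indirect) effect
total = slope(ses, swr)                  # equals direct + indirect
```

The decomposition total = direct + indirect holds exactly for OLS estimates, which is what makes comparisons of mediated shares across paths (VOC, PA, OLC) meaningful.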
Abstract:
Taking its inspiration from the ongoing debate on whether this time will be different for Greece and whether Syriza will deliver on its reform promises to the European partners, this Commentary expresses bemusement that the public debate on such an important issue, as well as internal discussions among senior policy-makers, frequently resorts to 'gut feelings' or simple stereotypes. To counteract this tendency, the author presents a simple analytical framework that can be used to assess the likelihood that a government will deliver on its reform agenda. Its purpose is not to allow for a precise probabilistic calculation, but to enable better structuring of the knowledge we have. It emphasises that change depends not only on the capacity of the state to design and deliver policies, but even more crucially on the state's autonomy from both illegitimate and legitimate interests, and on the cognitive models used by policy-makers to make sense of the world.
Abstract:
A collection of miscellaneous pamphlets on religion.
Abstract:
We used magnetoencephalography (MEG) to map the spatiotemporal evolution of cortical activity for visual word recognition. We show that for five-letter words, activity in the left hemisphere (LH) fusiform gyrus expands systematically in both the posterior-anterior and medial-lateral directions over the course of the first 500 ms after stimulus presentation. Contrary to what would be expected from cognitive models and hemodynamic studies, the component of this activity that spatially coincides with the visual word form area (VWFA) is not active until around 200 ms post-stimulus, and critically, this activity is preceded by and co-active with activity in parts of the inferior frontal gyrus (IFG, BA44/6). The spread of activity in the VWFA for words does not appear in isolation but is co-active in parallel with spread of activity in anterior middle temporal gyrus (aMTG, BA 21 and 38), posterior middle temporal gyrus (pMTG, BA37/39), and IFG.
Abstract:
The existence of different varieties of the acquired reading disorder termed "phonological dyslexia" is demonstrated in this thesis. The data are interpreted in terms of an information-processing model of normal reading which postulates autonomous routes for pronouncing lexical and non-lexical items and identifies a number of separable sub-processes within both the lexical and non-lexical routes. A case study approach is used, and case reports on ten patients who have particular difficulty in processing non-lexical stimuli following cerebral insult are presented. Chapters 1 and 2 describe the theoretical background to the investigation. Cognitive models of reading are examined in Chapter 1, and the theoretical status of the current taxonomy of the acquired dyslexias is discussed in relation to the models. In Chapter 2 the symptoms associated with phonological dyslexia are discussed both in terms of the theoretical models and in terms of the consistency with which they are reported to occur in clinical studies. Published cases of phonological dyslexia are reviewed. Chapter 3 describes the tests administered and the analysis of error responses. The majority of tests require reading aloud of single lexical or non-lexical items and investigate the effect of different variables on reading performance. Chapter 4 contains the case reports. The final chapter summarises the different patterns of reading behaviour observed. The theoretical model predicts the selective impairment of subsystems within the phonological route, and the data provide evidence of such selective impairment. It is concluded that there are different varieties of phonological dyslexia corresponding to the different loci of impairment within the phonological route. It is also concluded that the data support the hypothesis that there are two lexical routes. A further subdivision of phonological dyslexia is made on the basis of selective impairment of the direct or lexical-semantic routes.
Abstract:
This thesis attempted to explain society's worldview of Santeria and its practice of animal sacrifice, and the breakdown between the federal and local government after a 1993 Supreme Court ruling affirming practitioners' right to engage in this sacred ritual. Santeria practitioners are harassed and prosecuted for exercising their right to practice animal sacrifice. The research was intended to present the cosmology of the Lukumi tradition, the intellectual framework explored, a review of Freedom of Religion and the case of Lukumi v. Hialeah, and finally the media's role in shaping the worldview of Santeria that has perpetuated this breakdown. The thesis consisted of 87 research items, a community survey, interviews, a Santeria divination, and a review of case law, books, newspapers, and online journals. These findings demonstrated that freedom of religion is not so free in the U.S., and exists only to the extent the media and municipal laws choose to allow.
Abstract:
Coral reefs are increasingly threatened by global and local anthropogenic stressors, such as rising seawater temperature and nutrient enrichment. These two stressors vary widely across the reef face, and parsing out their influence on coral communities at reef system scales has been particularly challenging. Here, we investigate the influence of temperature and nutrients on coral community traits and life history strategies on lagoonal reefs across the Belize Mesoamerican Barrier Reef System (MBRS). A novel metric was developed using ultra-high-resolution sea surface temperatures (SST) to classify reefs as enduring low (lowTP), moderate (modTP), or extreme (extTP) temperature parameters over 10 years (2003 to 2012). Chlorophyll-a (chl a) records obtained for the same interval were employed as a proxy for bulk nutrients, and these records were complemented with in situ measurements to "sea truth" nutrient content across the three reef types. Chl a concentrations were highest at extTP sites, intermediate at modTP sites and lowest at lowTP sites. Coral species richness, abundance, diversity, density, and percent cover were lower at extTP sites compared to lowTP and modTP sites, but these reef community traits did not differ between lowTP and modTP sites. Coral life history strategy analyses showed that extTP sites were dominated by hardy stress-tolerant and fast-growing weedy coral species, while lowTP and modTP sites consisted of competitive, generalist, weedy, and stress-tolerant coral species. These results suggest that differences in coral community traits and life history strategies between extTP and lowTP/modTP sites were driven primarily by temperature differences, with differences in nutrients across site types playing a lesser role.
Dominance of weedy and stress-tolerant genera at extTP sites suggests that corals utilizing these two life history strategies may be better suited to cope with warmer oceans and thus may warrant further protective status during this climate change interval.
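Because the novel SST metric is not detailed in the abstract, the site classification can only be illustrated schematically; one plausible stand-in is the fraction of days a site spends above a bleaching-relevant threshold (all series, thresholds, and cut-offs below are invented):

```python
import numpy as np

# Hypothetical daily SST series (deg C) for three sites over ~10 years;
# the paper's actual metric is not specified, so this is only a sketch.
days = 3650
t = np.linspace(0, 20 * np.pi, days)   # 10 annual cycles
sites = {
    "site_low": 27.0 + 1.5 * np.sin(t),
    "site_mod": 28.5 + 2.0 * np.sin(t),
    "site_ext": 29.0 + 2.5 * np.sin(t),
}

def classify(sst, threshold=30.0):
    """Label a site by the fraction of days spent above the threshold."""
    frac = np.mean(sst > threshold)
    if frac < 0.01:
        return "lowTP"
    if frac < 0.30:
        return "modTP"
    return "extTP"

labels = {name: classify(sst) for name, sst in sites.items()}
```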
Data associated with this project are archived here, including:
-SST data
-Satellite Chl a data
-Nutrient measurements
-Raw coral community survey data
For questions contact Justin Baumann (j.baumann3
Abstract:
Continuous variables are among the major data types collected by survey organizations. Such data may be incomplete, so that the data collectors need to fill in the missing values, or may contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to sum the values up within cells defined by different combinations of features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.
The first method is for limiting the disclosure risk of continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. An illustration using a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
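The marginal-sum guarantee rests on a classical identity: independent Poisson counts conditioned on their total are jointly multinomial. A minimal sketch of that mechanism (a single Poisson component with invented cell values, not the thesis's full mixture model):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical magnitude cells, e.g. employment counts of four
# establishments in one industry cell (values invented).
original = np.array([120, 45, 300, 35])
total = int(original.sum())

# One Poisson rate per cell; the thesis uses a *mixture* of Poissons,
# but a single component suffices to show the marginal-sum guarantee.
rates = original.astype(float)
probs = rates / rates.sum()

# Independent Poisson counts conditioned on their sum are multinomial,
# so sampling the multinomial preserves the original total exactly.
synthetic = rng.multinomial(total, probs)
```

In the thesis's setting the rates would come from a fitted Poisson mixture, but conditioning on the total yields the fixed marginal in exactly the same way.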
The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model to the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals. Its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.
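The interval-based estimation idea can be sketched with an interval-censored normal likelihood, where only the protective intervals enter the objective (the data, the normal model, and the ±10% interval rule are all invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical confidential values; each is represented only by a
# protective interval (value +/- 10%), and only the intervals are used.
values = rng.normal(50.0, 8.0, size=200)
lo = values - 0.1 * np.abs(values)
hi = values + 0.1 * np.abs(values)

def neg_loglik(theta):
    """Interval-censored normal likelihood: P(lo < X < hi) per record."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    p = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

mid = (lo + hi) / 2.0                      # start from interval midpoints
fit = minimize(neg_loglik, x0=[mid.mean(), np.log(mid.std())],
               method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])

# Synthetic values come from the interval-fitted model, never from the
# confidential values themselves.
synthetic = rng.normal(mu_hat, sigma_hat, size=200)
```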
The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence, but separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., with many missing values) groups. The sub-model structure of the focused variables is more complex than that of the non-focused ones; their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving the strongly associated non-focused variables to the side of the focused ones can help to improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.
Abstract:
One of the leading motivations behind the multilingual semantic web is to make resources accessible digitally in an online global multilingual context. Consequently, it is fundamental for knowledge bases to manage multilingualism and thus be equipped with procedures for its conceptual modelling. In this context, the goal of this paper is to discuss how common-sense knowledge and cultural knowledge are modelled in a multilingual framework. More particularly, multilingualism and conceptual modelling are dealt with from the perspective of FunGramKB, a lexico-conceptual knowledge base for natural language understanding. This project argues for a clear division between the lexical and the conceptual dimensions of knowledge. Moreover, the conceptual layer is organized into three modules, which result from a strong commitment to capturing semantic knowledge (Ontology), procedural knowledge (Cognicon) and episodic knowledge (Onomasticon). Cultural mismatches are discussed and formally represented at the three conceptual levels of FunGramKB.
Abstract:
This work presents a study of the Baars-Franklin architecture, which defines a model of computational consciousness, and its use in a mobile robot navigation task. The insertion of mobile robots into dynamic environments makes navigation tasks highly complex; to deal with constant environmental change, it is essential that the robot be able to adapt to this dynamism. The approach taken in this work is to make the execution of these tasks closer to the way human beings react to the same conditions, by means of a model of computational consciousness. The LIDA architecture (Learning Intelligent Distribution Agent) is a cognitive system that seeks to model some human cognitive aspects, from low-level perception to decision making, as well as an attention mechanism and episodic memory. In the present work, a computational implementation of the LIDA architecture was evaluated by means of a case study, aiming to assess the capabilities of a cognitive approach to the navigation of a mobile robot in dynamic and unknown environments, using experiments both in virtual environments (simulation) and with a real robot in a realistic environment. The study concludes that it is possible to obtain benefits by using conscious cognitive models in mobile robot navigation tasks, and presents the positive and negative aspects of this approach.
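The cognitive cycle at the heart of such an architecture can be caricatured in a few lines (a heavily simplified sketch; the names and structure below are illustrative, not the LIDA framework's actual API):

```python
from dataclasses import dataclass

# Caricature of one LIDA-style cognitive cycle for navigation:
# perceive -> compete for attention -> broadcast -> select an action.

@dataclass
class Percept:
    label: str
    activation: float  # salience assigned by the perception stage

def cognitive_cycle(percepts, action_map):
    # Attention: the most activated percept wins the global workspace.
    winner = max(percepts, key=lambda p: p.activation)
    # Broadcast: action selection reacts to the conscious content;
    # unknown content falls back to exploration.
    return action_map.get(winner.label, "explore")

action_map = {"obstacle_ahead": "turn_left", "goal_visible": "move_forward"}
percepts = [Percept("obstacle_ahead", 0.9), Percept("goal_visible", 0.6)]
chosen = cognitive_cycle(percepts, action_map)
```

The full architecture adds learning, episodic memory, and many competing codelets; this sketch only shows why attention, rather than a fixed control law, drives the action choice.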
Abstract:
Atopic eczema affects many adults and up to 20% of children,1 with health costs comparable to diabetes2 and asthma.3 One community survey of 1760 young children in the United Kingdom found that 84% had mild eczema, 14% moderate, and 2% severe eczema.4 Topical corticosteroids are a mainstay of treatment for inflammatory episodes.5 Most long established topical corticosteroids such as betamethasone valerate or hydrocortisone are applied at least twice daily, but three newer preparations (mometasone, fluticasone, and methylprednisolone) have been developed for once daily application. Here, I propose that established preparations need be applied only once daily.