967 results for explicit categorization
Abstract:
We calculate the effective diffusion coefficient in convective flows which are well described by one spatial mode. We use an expansion in the distance from onset and homogenization methods to obtain an explicit expression for the transport coefficient. We find that spatially periodic fluid flow enhances the molecular diffusion D by a term proportional to D⁻¹. This enhancement should be easy to observe in experiments, since D is a small number.
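A worked form consistent with this statement, written as a sketch in which the velocity amplitude u, the wavenumber k and the constant α are assumed notation rather than the paper's exact expression:

```latex
D_{\mathrm{eff}} \;=\; D + \alpha\,\frac{u^{2}}{k^{2}D}
                 \;=\; D\left(1 + \alpha\,\mathrm{Pe}^{2}\right),
\qquad \mathrm{Pe} = \frac{u}{kD},
```

so the enhancement term scales as D⁻¹ and dominates whenever the molecular diffusivity D is small compared with u/k.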
Abstract:
Interactive Choice Aid (ICA) is a decision aid, introduced in this paper, that systematically assists consumers with online purchase decisions. ICA integrates aspects from prescriptive decision theory, insights from descriptive decision research, and practical considerations, thereby combining pre-existing best practices with novel features. Instead of imposing an objectively ideal but unnatural decision procedure on the user, ICA assists the natural process of human decision-making by providing explicit support for the execution of the user's decision strategies. The application contains an innovative feature for in-depth comparisons of alternatives through which users' importance ratings are elicited interactively and in a playful way. The usability and general acceptance of the choice aid were studied; the results show that ICA is a promising contribution and provide insights that may further improve its usability.
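The in-depth comparison feature elicits users' importance ratings interactively; a minimal sketch of how such ratings could feed a weighted-additive evaluation of alternatives is given below (the attribute names, normalization and example values are illustrative assumptions, not ICA's actual implementation):

```python
def weighted_additive_score(importance, scores):
    """Rank alternatives by a weighted-additive rule over elicited importance ratings.

    importance : dict attribute -> user-elicited importance (any positive scale)
    scores     : dict alternative -> dict attribute -> normalized score in [0, 1]
    """
    total = sum(importance.values())
    weights = {a: w / total for a, w in importance.items()}  # normalize weights to sum to 1
    return {
        alt: sum(weights[a] * vals.get(a, 0.0) for a in weights)
        for alt, vals in scores.items()
    }

# Hypothetical elicitation for a camera purchase (attributes and values are made up).
importance = {"price": 5, "image quality": 8, "weight": 3}
scores = {
    "Camera A": {"price": 0.9, "image quality": 0.6, "weight": 0.8},
    "Camera B": {"price": 0.5, "image quality": 0.9, "weight": 0.4},
}
ranked = sorted(weighted_additive_score(importance, scores).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked)
```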
Abstract:
A recent study of a pair of sympatric species of cichlids in Lake Apoyo in Nicaragua is viewed as providing probably one of the most convincing examples of sympatric speciation to date. Here, we describe and study a stochastic, individual-based, explicit genetic model tailored for this cichlid system. Our results show that relatively rapid (<20,000 generations) colonization of a new ecological niche and (sympatric or parapatric) speciation via local adaptation and divergence in habitat and mating preferences are theoretically plausible if: (i) the number of loci underlying the traits controlling local adaptation, and habitat and mating preferences is small; (ii) the strength of selection for local adaptation is intermediate; (iii) the carrying capacity of the population is intermediate; and (iv) the effects of the loci influencing nonrandom mating are strong. We discuss patterns and timescales of ecological speciation identified by our model, and we highlight important parameters and features that need to be studied empirically to provide information that can be used to improve the biological realism and power of mathematical models of ecological speciation.
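A minimal sketch of one building block of such individual-based genetic models: an additive multilocus trait under Gaussian stabilizing selection toward a habitat optimum (the number of loci, allele effects and selection strength below are illustrative assumptions, not the paper's parameterization):

```python
import numpy as np

rng = np.random.default_rng(0)

n_individuals = 200
n_loci = 8             # small number of loci underlying the trait, as in condition (i)
selection_sigma = 1.0  # intermediate strength of selection, as in condition (ii)
habitat_optimum = 0.5

# Diploid genotypes: 0/1 alleles at each locus, two copies per individual.
genotypes = rng.integers(0, 2, size=(n_individuals, n_loci, 2))

# Additive trait: mean allelic value across loci and copies, scaled to [0, 1].
trait = genotypes.mean(axis=(1, 2))

# Gaussian stabilizing selection toward the local habitat optimum.
fitness = np.exp(-((trait - habitat_optimum) ** 2) / (2 * selection_sigma ** 2))

# Viability selection: survival probability proportional to relative fitness.
survivors = rng.random(n_individuals) < fitness / fitness.max()
print(f"{survivors.sum()} of {n_individuals} individuals survive selection")
```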
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to characterize the influence of geology, which is closely associated with indoor radon. This association was indeed observed in the Swiss data, but it did not prove to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) was proposed as a procedure to evaluate data clustering as a function of the radon level. Existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, data partitioning was optimized to cope with the stationarity conditions of geostatistical models. Common spatial modeling methods such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools.
In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top approach in method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions.
The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for reproducing the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) were shown to be better adapted to modeling high-threshold categorization and to automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient decision making about indoor radon.
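A minimal sketch of the classical Morisita index of dispersion, on which the QMI procedure mentioned above builds (the quadrat grid and synthetic coordinates are illustrative; this is not the thesis's exact QMI formulation):

```python
import numpy as np

def morisita_index(x, y, n_bins=10):
    """Classical Morisita index of dispersion for 2-D point data.

    Values near 1 indicate random scattering, > 1 clustering, < 1 regularity.
    """
    counts, _, _ = np.histogram2d(x, y, bins=n_bins)  # quadrat counts
    n = counts.ravel()
    N = n.sum()
    Q = n.size
    if N < 2:
        raise ValueError("need at least two points")
    return Q * np.sum(n * (n - 1)) / (N * (N - 1))

# Illustrative use on synthetic coordinates (not the Swiss radon data).
rng = np.random.default_rng(42)
x, y = rng.random(500), rng.random(500)
print(f"Morisita index: {morisita_index(x, y):.2f}")
```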
Abstract:
Introduction: The development of novel therapies and the increasing number of trials testing management strategies for luminal Crohn's disease (CD) have not filled all the gaps in our knowledge. Thus, in clinical practice, many decisions for CD patients need to be taken without high-quality evidence. For this reason, a multidisciplinary European expert panel followed the RAND method to develop explicit criteria for the management of individual patients with active, steroid-dependent (ST-D) and steroid-refractory (ST-R) CD. Methods: Twelve international experts convened in Geneva, Switzerland, in December 2007 to rate explicit clinical scenarios, corresponding to real daily practice, on a 9-point scale according to the literature evidence and their own expertise. Median ratings were stratified into three categories: appropriate (7-9), uncertain (4-6) and inappropriate (1-3). Results: Overall, panelists rated 296 indications pertaining to mild-to-moderate, severe, ST-D, and ST-R CD. In anti-TNF-naïve patients, budesonide and prednisone were found appropriate for mild-to-moderate CD, and infliximab (IFX) when these had previously failed or had not been tolerated. In patients with prior success with IFX, this drug, with or without co-administration of a thiopurine analog, was favored. Other anti-TNFs were appropriate in case of intolerance or resistance to IFX. High-dose steroids, IFX or adalimumab were appropriate in severe active CD. Among 105 indications for ST-D or ST-R disease, the panel considered appropriate the thiopurine analogs, methotrexate, IFX, adalimumab and surgery for limited resection, depending on the outcome of prior therapies. Anti-TNFs were generally considered appropriate in ST-R disease. Conclusion: Steroids, including budesonide for mild-to-moderate CD, remain first-line therapies in active luminal CD. Anti-TNFs, in particular IFX given the amount of available evidence, remain second-line for most indications. Thiopurine analogs are preferred to anti-TNFs when steroids are not appropriate, except when anti-TNFs were previously successful. These recommendations are available online (www.epact.ch). A prospective evaluation in a large database in Switzerland is underway to validate these criteria.
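A minimal illustration of the rating-to-category rule described above (the function name and input format are assumptions; the 1-3/4-6/7-9 cut-offs are those reported, and the full RAND method additionally accounts for panel disagreement, which is omitted here):

```python
from statistics import median

def rand_appropriateness(panel_ratings):
    """Map the median of 9-point panel ratings to a RAND appropriateness category."""
    m = median(panel_ratings)
    if m >= 7:
        return "appropriate"
    if m >= 4:
        return "uncertain"
    return "inappropriate"

# Example: twelve hypothetical expert ratings for one clinical scenario.
print(rand_appropriateness([8, 7, 9, 6, 8, 7, 7, 8, 9, 6, 7, 8]))  # -> appropriate
```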
Abstract:
ABSTRACT: Little information is available on the prevalence of, and the reasons for, refusal of influenza vaccination in the elderly population. The aim of our study was to investigate the true reasons for refusing vaccination (i.e., not merely the reasons for non-vaccination, which are sometimes beyond the patient's control) among elderly people. All outpatients over 65 years of age consulting the Policlinique Médicale Universitaire (PMU) of Lausanne or their attending physician during the 1999-2000 and 2000-2001 influenza vaccination periods were included. Each patient received information on influenza and its complications, as well as on the need for vaccination, its efficacy and its possible side effects. In the absence of contraindications, vaccination was offered. In case of refusal, the reasons were investigated with an open question. Of the 1398 subjects included, 148 (12%) refused vaccination. The main reasons for refusal were the perception of being in good health (16%), of not being susceptible to influenza (15%), or of never having been vaccinated against influenza in the past (15%). A bad experience with vaccination, either personal or that of a relative (15%), and the impression that the vaccine is useless (10%) were also cited. 17% of respondents gave other reasons and 12% did not explain their refusal. Refusal of influenza vaccination in the elderly population is essentially linked to patients' personal convictions about their state of health and their susceptibility to influenza, as well as to the presumed efficacy of the vaccine. Resistance to change appears to be a major obstacle to the introduction of vaccination among people over 65. SUMMARY: More knowledge of the reasons for refusal of the influenza vaccine in elderly patients is essential to target groups for additional information, and hence to improve the coverage rate. The objective of the present study was to describe precisely the true motives for refusal. All patients aged over 64 who attended the Medical Outpatient Clinic, University of Lausanne, or their private practitioner's office during the 1999 and 2000 vaccination periods were included. Each patient was informed about influenza and its complications, as well as about the need for vaccination, its efficacy and adverse events. The vaccination was then proposed. In case of refusal, the reasons were investigated with an open question. Out of 1398 patients, 148 (12%) refused the vaccination. The main reasons for refusal were the perception of being in good health (16%), of not being susceptible to influenza (15%), of not having had the influenza vaccine in the past (15%), of having had a bad experience, either personally or through a relative (15%), and the perceived uselessness of the vaccine (10%). Seventeen percent gave miscellaneous reasons and 12% gave no reason at all for refusal. Little epidemiological knowledge and resistance to change appear to be the major obstacles to wide acceptance of the vaccine by the elderly.
Abstract:
OBJECTIVE: To compare published guidelines concerning screening for gestational diabetes. STUDY DESIGN: Systematic search and comparative analysis of published guidelines. Appraisal of guideline quality. Simulation analysis. RESULTS: Ten published guidelines proposed either universal screening (5), selective screening (3) or screening only when clinically indicated (2). Variations in testing schedules and blood glucose thresholds were observed. The quality of the published guidelines was low, on average 22 (range 8-51) percentage points on the assessment scale. These differences would have led to large variations in the number of patients to be screened. CONCLUSIONS: Large variations between guidelines were observed, which would translate into large practice variations if the guidelines were systematically applied. These variations are partially explained by the absence of definite evidence that universal or selective screening for gestational diabetes does more good than harm to infant and maternal health. The methodology for developing guidelines should be more evidence-based, systematic and explicit.
Abstract:
The purpose of this article is to address a currently much-debated issue: the effects of age on second language learning. To do so, we contrast data collected by our research team from over one thousand seven hundred young and adult learners with four popular beliefs or generalizations which, while deeply rooted in this society, are not always corroborated by our data. Two of these generalizations about Second Language Acquisition (languages spoken in the social context) seem to be widely accepted: (a) older children, adolescents and adults are quicker and more efficient at the first stages of learning than are younger learners; (b) in a natural context, children with an early start are more likely to attain higher levels of proficiency. However, in the context of Foreign Language Acquisition, the context in which we collect our data, this second generalization is difficult to verify due to the low number of instructional hours (a maximum of some 800 hours) and the lower levels of language exposure time provided. The design of our research project has allowed us to study differences observed with respect to the age of onset (ranging from 2 to 18+), but in this article we focus on students who began English instruction at the age of 8 (LOGSE Educational System) and those who began at the age of 11 (EGB). We have collected data from both groups after a period of 200 (Time 1) and 416 instructional hours (Time 2), and we are currently collecting data after a period of 726 instructional hours (Time 3). We have designed and administered a variety of tests: tests of English production and reception, both oral and written, within both academically and communicatively oriented approaches; tests of the learners' L1 (Spanish and Catalan); and a questionnaire eliciting personal and sociolinguistic information. The questions we address and the relevant empirical evidence are as follows.
1. "For young children, learning languages is a game. They enjoy it more than adults." Our data demonstrate that this is not quite the case. Firstly, at both the Primary and Secondary levels, students have a positive attitude towards learning English (ranging from 70.5% in 11-year-olds to 89% in 14-year-olds). Secondly, there is a difference between the two groups with respect to the factors they cite as responsible for their motivation to learn English: the younger students cite intrinsic factors, such as the games they play, the methodology used and the teacher, whereas the older students cite extrinsic factors, such as the role of their knowledge of English in the achievement of their future professional goals.
2. "Young children have more resources to learn languages." Here our data suggest just the opposite. The ability to employ learning strategies (actions or steps used) increases with age. Older learners' strategies are more varied and cognitively more complex. In contrast, younger learners depend more on their interlocutor and on external resources, and therefore have a lower level of autonomy in their learning.
3. "Young children don't talk much but understand a lot." This third generalization does seem to be confirmed, at least to a certain extent, by our data in relation to the analysis of differences due to the age factor in the productive use of the target language. As seen above, the comparatively slower progress of the younger learners is confirmed. Our analysis of interpersonal receptive abilities also demonstrates the advantage of the older learners. Nevertheless, with respect to passive receptive activities (for example, simple recognition of words or sentences), no great differences are observed. Statistical analyses suggest that in this test, in contrast to the others analyzed, the dominance of the subjects' L1s (reflecting a cognitive capacity that grows with age) has no significant influence on the learning process.
4. "The sooner they begin, the better their results will be in written language." This is not completely confirmed in our research either. First of all, we observe that certain compensatory strategies disappear only with age, not with the number of instructional hours. Secondly, given an identical number of instructional hours, the older subjects obtain better results. With respect to our analysis of data from subjects of the same age (12 years old) but with a different number of instructional hours (200 and 416 respectively, as they began at the ages of 11 and 8), we observe that those who began earlier excel only in the area of lexical fluency.
In conclusion, the faster progress of older learners appears to be due to their higher level of cognitive development, a factor which allows them to benefit more from formal or explicit instruction in the school context. Younger learners, however, do not benefit from the quantity and quality of linguistic exposure typical of a natural acquisition context, in which they would be able to make use of implicit learning abilities. It seems clear, then, that the initiative in this country to begin foreign language instruction earlier will have positive effects only if it is combined either with higher levels of exposure to the foreign language or with its use as the language of instruction in other areas of the curriculum.
Abstract:
This article studies alterations in the values, attitudes, and behaviors that emerged among U.S. citizens as a consequence of, and as a response to, the attacks of September 11, 2001. The study briefly examines the immediate reaction to the attack, before focusing on the collective reactions that characterized the behavior of the majority of the population between the events of 9/11 and the response to it in the form of intervention in Afghanistan. In studying this period an eight-phase sequential model (Botcharova, 2001) is used, where the initial phases center on the nation as the ingroup and the latter focus on the enemy who carried out the attack as the outgroup. The study is conducted from a psychosocial perspective and uses "social identity theory" (Tajfel & Turner, 1979, 1986) as the basic framework for interpreting and accounting for the collective reactions recorded. The main purpose of this paper is to show that the interpretation of these collective reactions is consistent with the postulates of social identity theory. The application of this theory provides a different and specific analysis of events. The study is based on data obtained from a variety of rigorous academic studies and opinion polls conducted in relation to the events of 9/11. In line with social identity theory, 9/11 had a marked impact on the importance attached by the majority of U.S. citizens to their identity as members of a nation. This in turn accentuated group differentiation and activated ingroup favoritism and outgroup discrimination (Tajfel & Turner, 1979, 1986). Ingroup favoritism strengthened group cohesion, feelings of solidarity, and identification with the most emblematic values of the U.S. nation, while outgroup discrimination induced U.S. citizens to conceive the enemy (al-Qaeda and its protectors) as the incarnation of evil, depersonalizing the group and venting their anger on it, and to give their backing to a military response, the eventual intervention in Afghanistan. Finally, and also in line with the postulates of social identity theory, as an alternative to the virtual bipolarization of the conflict (U.S. vs al-Qaeda), the activation of a higher level of identity in the ingroup is proposed, a group that includes the United States and the largest possible number of countries, including Islamic states, in the search for a common, more legitimate and effective solution.
Abstract:
This study analyses the fundamental components shaping the violence-legitimation discourse of ETA (Euskadi Ta Askatasuna). To this end, a category system was built which organizes the psychosocial processes identified in previous studies of violence legitimation. Based on the proposed category system, a content analysis was conducted on 21 statements released by ETA between 1998 and 2011. An intra-observer and inter-observer reliability analysis reveals a high level of stability and replicability of the categorization. The results show, firstly, that outgroup components have a predominant presence over ingroup components. Secondly, in the hierarchy of components, elements referring to identity come first, followed with similar frequencies by those related to the representation of violence and the definition of the situation.
Abstract:
Several models have been proposed to understand how so many species can coexist in ecosystems. Despite evidence showing that natural habitats are often patchy and fragmented, these models rarely take into account environmental spatial structure. In this study we investigated the influence of spatial structure in habitat and disturbance regime upon species' traits and species' coexistence in a metacommunity. We used a population-based model to simulate competing species in spatially explicit landscapes. The species traits we focused on were dispersal ability, competitiveness, reproductive investment and survival rate. Communities were characterized by their species richness and by the four life-history traits averaged over all the surviving species. Our results show that spatial structure and disturbance have a strong influence on the equilibrium life-history traits within a metacommunity. In the absence of disturbance, spatially structured landscapes favour species investing more in reproduction, but less in dispersal and survival. However, this influence is strongly dependent on the disturbance rate, pointing to an important interaction between spatial structure and disturbance. This interaction also plays a role in species coexistence. While spatial structure tends to reduce diversity in the absence of disturbance, the tendency is reversed when disturbance occurs. In conclusion, the spatial structure of communities is an important determinant of their diversity and characteristic traits. These traits are likely to influence important ecological properties such as resistance to invasion or response to climate change, which in turn will determine the fate of ecosystems facing the current global ecological crisis.
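A minimal sketch of a spatially explicit, patch-based competition model in the spirit of the one described above, with two species trading off dispersal against competitiveness and with random patch disturbance (all parameter values and update rules are illustrative assumptions, not the study's model):

```python
import numpy as np

rng = np.random.default_rng(1)

size = 50                 # landscape is a size x size grid of habitat patches
n_steps = 200
disturbance_rate = 0.01   # per-patch probability of disturbance each step

# Two illustrative species: 0 = empty patch, 1 = good disperser, 2 = good competitor.
dispersal = {1: 0.6, 2: 0.2}        # colonization pressure exerted on neighbouring patches
competitiveness = {1: 0.3, 2: 0.7}  # probability of displacing the other species

grid = rng.integers(0, 3, size=(size, size))

def neighbours(grid, species):
    """Fraction of the four nearest neighbours occupied by `species` (periodic edges)."""
    occ = (grid == species).astype(float)
    return (np.roll(occ, 1, 0) + np.roll(occ, -1, 0) +
            np.roll(occ, 1, 1) + np.roll(occ, -1, 1)) / 4.0

for _ in range(n_steps):
    for sp in (1, 2):
        pressure = neighbours(grid, sp) * dispersal[sp]
        colonize = rng.random(grid.shape) < pressure
        # Colonize empty patches, or displace the competitor with some probability.
        grid[(grid == 0) & colonize] = sp
        other = 2 if sp == 1 else 1
        displace = colonize & (grid == other) & (rng.random(grid.shape) < competitiveness[sp])
        grid[displace] = sp
    # Disturbance empties random patches regardless of occupant.
    grid[rng.random(grid.shape) < disturbance_rate] = 0

for sp in (1, 2):
    print(f"species {sp}: {np.mean(grid == sp):.2%} of patches occupied")
```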
Abstract:
Paradise Lost can be read on various levels, some of which challenge or even contradict others. The main, explicit narrative from Genesis chapters 2 and 3 is shadowed by many other related stories. Some of these buried tales question or subvert the values made explicit in the dominant narrative. An attentive reader needs to be alert to the ways in which such references introduce teasing complexities. The approach of Satan to Eve in the ninth book of Paradise Lost is loaded in just that way with allusion to the literature of Greece and Rome. The poem recovers for this long and intricately constructed passage the weight of classical reference, especially in similes, that it had during the first Satanic books. Gardens, both classical and biblical, disguised or transformed serpents, and the weight of allusions that Eve is required to bear, all threaten to undermine the meanings of the overt narrative. The narrator has difficulty rescuing Eve from the allusions she attracts, or the many stories told about her.
Abstract:
The characterization and categorization of coarse aggregates for use in portland cement concrete (PCC) pavements is a highly refined process at the Iowa Department of Transportation. Over the past 10 to 15 years, much effort has been directed at pursuing direct testing schemes to supplement or replace existing physical testing schemes. Direct testing refers to the process of directly measuring the chemical and mineralogical properties of an aggregate and then attempting to correlate those measured properties to historical performance information (i.e., field service record). This is in contrast to indirect measurement techniques, which generally attempt to extrapolate the performance of laboratory test specimens to expected field performance. The purpose of this research project was to investigate and refine the use of direct testing methods, such as X-ray analysis techniques and thermal analysis techniques, to categorize carbonate aggregates for use in portland cement concrete. The results of this study indicated that the general testing methods that are currently used to obtain data for estimating service life tend to be very reliable and have good to excellent repeatability. Several changes in the current techniques were recommended to enhance the long-term reliability of the carbonate database. These changes can be summarized as follows: (a) Limits that are more stringent need to be set on the maximum particle size in the samples subjected to testing. This should help to improve the reliability of all three of the test methods studied during this project. (b) X-ray diffraction testing needs to be refined to incorporate the use of an internal standard. This will help to minimize the influence of sample positioning errors and it will also allow for the calculation of the concentration of the various minerals present in the samples. (c) Thermal analysis data needs to be corrected for moisture content and clay content prior to calculating the carbonate content of the sample.
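A minimal sketch of the kind of correction described in (c): estimating calcite content from thermogravimetric mass losses after subtracting moisture- and clay-related losses (the temperature windows, calcite stoichiometry and example numbers are illustrative assumptions, not the Iowa Department of Transportation procedure):

```python
# Molar masses: CaCO3 = 100.09 g/mol releases CO2 = 44.01 g/mol on decomposition.
CO2_TO_CACO3 = 100.09 / 44.01

def carbonate_content(mass_loss_moisture, mass_loss_clay, mass_loss_carbonate, sample_mass):
    """Estimate calcite content (mass fraction) from thermal analysis mass losses in grams.

    mass_loss_moisture : loss below ~200 C (free and adsorbed water)
    mass_loss_clay     : loss in an assumed clay dehydroxylation window (~400-600 C)
    mass_loss_carbonate: loss in an assumed carbonate decomposition window (~600-900 C)
    """
    # Express calcite on a dry, clay-corrected basis.
    corrected_mass = sample_mass - mass_loss_moisture - mass_loss_clay
    calcite_mass = mass_loss_carbonate * CO2_TO_CACO3
    return calcite_mass / corrected_mass

# Illustrative numbers (grams), not measured data.
print(f"calcite fraction: {carbonate_content(0.02, 0.05, 0.35, 1.00):.1%}")
```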
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy, Total Variation (TV)-based energies and, more recently, non-local means. Although TV energies are quite attractive because of their ability to preserve edges, standard explicit steepest-gradient-descent techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal w.r.t. the asymptotic and iterative convergence speeds O(1/n²) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
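A minimal sketch of the kind of accelerated first-order scheme that attains the O(1/n²) rate, here applied to a least-squares data term with a smoothed TV surrogate for 2-D denoising; it illustrates the optimization principle only and is not the authors' super-resolution algorithm (the smoothing parameter, step-size bound and operators are assumptions):

```python
import numpy as np

def grad(u):
    """Forward-difference image gradient with zero flux at the last row/column."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of `grad`."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def smoothed_tv_grad(u, eps):
    """Gradient of the smoothed TV energy sum(sqrt(|grad u|^2 + eps^2))."""
    gx, gy = grad(u)
    norm = np.sqrt(gx**2 + gy**2 + eps**2)
    return -div(gx / norm, gy / norm)

def fista_denoise(b, lam=0.1, eps=1e-2, n_iter=200):
    """Accelerated (FISTA-style) gradient descent on 0.5*||u - b||^2 + lam*TV_eps(u)."""
    L = 1.0 + lam * 8.0 / eps          # crude Lipschitz bound used for the step size
    u = b.copy(); y = b.copy(); t = 1.0
    for _ in range(n_iter):
        g = (y - b) + lam * smoothed_tv_grad(y, eps)
        u_next = y - g / L
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        y = u_next + ((t - 1.0) / t_next) * (u_next - u)   # Nesterov momentum step
        u, t = u_next, t_next
    return u

# Illustrative use on a noisy synthetic image (not fetal MRI data).
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = fista_denoise(noisy)
print(f"noisy RMSE {np.sqrt(np.mean((noisy - clean)**2)):.3f} -> "
      f"denoised RMSE {np.sqrt(np.mean((denoised - clean)**2)):.3f}")
```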