890 results for Context Model
Abstract:
In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions described within the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles at different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating degree-of-dissociation versus pH curves for the latex functional groups at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to maintain the electroneutrality of the system is required. Here, two approaches are used, distinguished by the ion selected to maintain electroneutrality: a counterion procedure and a coion procedure. We compare and discuss the differences between the two procedures. The simulations also provided a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups. © 2011 American Institute of Physics.
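As an aside on the counterion procedure mentioned above, the sketch below (assumed, not the authors' code) shows what a single constant-pH Monte Carlo deprotonation move with counterion insertion might look like within the primitive model; the pKa, Bjerrum length, box size and the bare Coulomb energy routine are illustrative placeholders, and a full semi-grand canonical implementation would also include the insertion (volume/fugacity) factor for the added ion and periodic boundary conditions.

```python
# Minimal sketch (assumed, not from the paper): one titration move with the
# "counterion procedure" -- deprotonate a neutral surface group and insert a
# monovalent counterion so that the box stays electroneutral.
import math
import random

BJERRUM = 0.7            # Bjerrum length in nm (illustrative value for water)
BOX = 10.0               # cubic box edge in nm (placeholder)
PKA, PH = 4.9, 7.0       # placeholder carboxyl pKa and bulk pH

def coulomb_energy(charges, positions):
    """Bare Coulomb energy, in units of kT, of the primitive-model charges."""
    u = 0.0
    for i in range(len(charges)):
        for j in range(i + 1, len(charges)):
            u += BJERRUM * charges[i] * charges[j] / math.dist(positions[i], positions[j])
    return u

def deprotonation_move(site, charges, positions):
    """Switch one neutral site to charge -1 and add a +1 counterion."""
    old_u = coulomb_energy(charges, positions)
    new_charges = list(charges)
    new_charges[site] = -1                               # deprotonated COO- group
    new_charges.append(+1)                               # inserted counterion
    new_positions = list(positions)
    new_positions.append([random.uniform(0.0, BOX) for _ in range(3)])
    d_u = coulomb_energy(new_charges, new_positions) - old_u
    # Electrostatic change plus the ideal (chemical) term of constant-pH MC;
    # the counterion insertion factor of a full semi-grand canonical move is
    # omitted here for brevity.
    acc = math.exp(-d_u + math.log(10.0) * (PH - PKA))
    if random.random() < acc:
        return new_charges, new_positions                # move accepted
    return charges, positions                            # move rejected

# Tiny usage example: three neutral surface sites, no salt ions yet.
charges = [0, 0, 0]
positions = [[1.0, 1.0, 0.0], [2.0, 1.0, 0.0], [3.0, 1.0, 0.0]]
charges, positions = deprotonation_move(0, charges, positions)
print(charges)
```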
Abstract:
This paper presents the quantitative and qualitative findings from an experiment designed to evaluate a developing model of affective postures for full-body virtual characters in immersive virtual environments (IVEs). Forty-nine participants were each asked to explore a virtual environment by asking two virtual characters for instructions. The participants used a CAVE-like system to explore the environment. Participants' responses and their impressions of the virtual characters were evaluated through a wide variety of both quantitative and qualitative methods. Combining a controlled experimental approach with varied data-collection methods offered a number of advantages, such as helping to explain the quantitative results. The quantitative results indicate that posture plays an important role in the communication of affect by virtual characters. The qualitative findings indicate that participants attributed a variety of psychological states to the behavioral cues displayed by virtual characters. In addition, participants tended to interpret the social context portrayed by the virtual characters in a holistic manner. This suggests that one aspect of the virtual scene colors the perception of the whole social context portrayed by the virtual characters. We conclude by discussing the importance of designing holistically congruent virtual characters, especially in immersive settings.
Abstract:
The purpose of this Master's thesis was to evaluate the post-acquisition integration process. The purpose of integration is to adapt the acquired company into a functioning part of the group. The empirical problem addressed by the work was the widely acknowledged complexity of integration management; likewise, the academic literature lacked a coherent model with which to evaluate integration. The case studied was an acquisition in which a large Finnish information technology company bought a majority shareholding in a medium-sized Czech software company. The study generated a model of integration management for a knowledge-based organization. According to the model, integration consists of three distinct but mutually supporting areas: the convergence of organizational cultures, the levelling of knowledge capital, and the harmonization of intra-group processes. Of these, the latter two can be managed directly, whereas integration management can influence cultural convergence only as a catalyst: organizational culture spreads only through the interactions of those involved. In addition, the study showed how an acquisition is a revolutionary phase in a company's development. The first period of integration is revolutionary; during it, the largest and most visible managed changes are pursued so that the integration can move on to evolutionary development. Revolutionary integration is driven by the integration management, whereas evolutionary integration advances through the actions and interactions of the participants (the members of the organization) themselves.
Abstract:
Many species are able to learn to associate behaviours with rewards, as this gives fitness advantages in changing environments. Social interactions between population members may, however, require more cognitive abilities than simple trial-and-error learning, in particular the capacity to make accurate hypotheses about the material payoff consequences of alternative action combinations. It is unclear in this context whether natural selection necessarily favours individuals to use information about payoffs associated with nontried actions (hypothetical payoffs), as opposed to simple reinforcement of realized payoff. Here, we develop an evolutionary model in which individuals are genetically determined to use either trial-and-error learning or learning based on hypothetical reinforcements, and ask which learning rule is evolutionarily stable under pairwise symmetric two-action stochastic repeated games played over the individual's lifetime. We analyse the learning dynamics on the behavioural timescale through stochastic approximation theory and simulations, and derive conditions under which trial-and-error learning outcompetes hypothetical reinforcement learning on the evolutionary timescale. This occurs in particular under repeated cooperative interactions with the same partner. By contrast, we find that hypothetical reinforcement learners tend to be favoured under random interactions, but stable polymorphisms can also arise in which trial-and-error learners are maintained at a low frequency. We conclude that specific game structures can select for trial-and-error learning even in the absence of costs of cognition, which illustrates that cost-free increased cognition can be counterselected under social interactions.
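To make the contrast between the two learning rules concrete, here is a minimal sketch (assumed, not the authors' model): both learners track action valuations in a two-action repeated game, but only the hypothetical-reinforcement learner also updates the action it did not play, using the payoff it would have received against the partner's move. The payoff matrix, learning rate and proportional choice rule are placeholder choices; the paper's stochastic approximation analysis is considerably richer.

```python
# Illustrative sketch (not the authors' model): trial-and-error vs hypothetical
# reinforcement learning in a two-action stochastic repeated game.
import random

PAYOFF = {("C", "C"): 3.0, ("C", "D"): 0.0,   # a Prisoner's-Dilemma-like matrix,
          ("D", "C"): 4.0, ("D", "D"): 1.0}   # used purely for illustration
ACTIONS = ("C", "D")
RATE = 0.1                                    # placeholder learning rate

def choose(values):
    """Play actions with probability proportional to their current valuations."""
    total = sum(values.values())
    r = random.uniform(0.0, total)
    for action in ACTIONS:
        r -= values[action]
        if r <= 0:
            return action
    return ACTIONS[-1]

def update(values, own, partner, hypothetical):
    """One learning step after a round in which `own` met `partner`."""
    values[own] += RATE * (PAYOFF[(own, partner)] - values[own])
    if hypothetical:                          # also reinforce the non-tried action
        other = "C" if own == "D" else "D"
        values[other] += RATE * (PAYOFF[(other, partner)] - values[other])

# Example: a trial-and-error learner repeatedly paired with a hypothetical learner.
trial_error = {"C": 1.0, "D": 1.0}
hypo = {"C": 1.0, "D": 1.0}
for _ in range(500):
    a, b = choose(trial_error), choose(hypo)
    update(trial_error, a, b, hypothetical=False)
    update(hypo, b, a, hypothetical=True)
print(trial_error, hypo)
```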
Abstract:
Evidence collected from smartphone users shows a growing desire for the personalization offered by services for mobile devices. However, the need to accurately identify users' contexts has important implications for user privacy and increases the amount of trust that users are asked to place in service providers. In this paper, we introduce a model that describes the role of personalization and control in users' assessment of the costs and benefits associated with the disclosure of private information. We present an instantiation of this model, a context-aware application for smartphones based on the Android operating system in which users' private information is protected. Focus group interviews were conducted to examine users' privacy concerns before and after they had used our application. The results obtained confirm the utility of our artifact and provide support for our theoretical model, which extends previous literature on the privacy calculus and users' acceptance of context-aware technology.
Abstract:
Diagrams and tools help to support task modelling in engineering and process management. Unfortunately, they are ill-suited to a business context at the strategic level, because of the flexibility needed for creative thinking and user-friendly interaction. We propose a tool that bridges the gap between freedom of action, which encourages creativity, and constraints, which allow validation and advanced features.
Abstract:
Among unidentified gamma-ray sources in the galactic plane, there are some that present significant variability and have been proposed to be high-mass microquasars. To deepen the study of the possible association between variable low galactic latitude gamma-ray sources and microquasars, we have applied a leptonic jet model based on the microquasar scenario that reproduces the gamma-ray spectrum of three unidentified gamma-ray sources, 3EG J1735-1500, 3EG J1828+0142 and GRO J1411-64, and is consistent with the observational constraints at lower energies. We conclude that if these sources were generated by microquasars, the particle acceleration processes could not be as efficient as in other objects of this type that present harder gamma-ray spectra. Moreover, the dominant mechanism of high-energy emission should be synchrotron self-Compton (SSC) scattering, and the radio jets may only be observed at low frequencies. For each particular case, further predictions of jet physical conditions and variability generation mechanisms have been made in the context of the model. Although there might be other candidates able to explain the emission coming from these sources, microquasars cannot be excluded as counterparts. Observations performed by the next generation of gamma-ray instruments, like GLAST, are required to test the proposed model.
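For readers unfamiliar with the SSC mechanism invoked here, the following is the standard textbook estimate (not a result of this paper): relativistic electrons in the jet upscatter their own synchrotron photons, and in the Thomson regime the scattered photon energy is boosted roughly as the square of the electron Lorentz factor.

```latex
% Standard Thomson-regime estimate for synchrotron self-Compton scattering
% (textbook relation, not taken from the paper):
\epsilon_{\mathrm{SSC}} \;\simeq\; \frac{4}{3}\,\gamma_{e}^{2}\,\epsilon_{\mathrm{syn}},
\qquad \text{valid while } \gamma_{e}\,\epsilon_{\mathrm{syn}} \ll m_{e}c^{2}
% (otherwise Klein-Nishina suppression becomes important).
```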
Abstract:
Context. The understanding of Galaxy evolution can be facilitated by the use of population synthesis models, which make it possible to test hypotheses on the star formation history, stellar evolution, and the chemical and dynamical evolution of the Galaxy. Aims. The new version of the Besançon Galaxy Model (hereafter BGM) aims to provide a more flexible and powerful tool to investigate the Initial Mass Function (IMF) and Star Formation Rate (SFR) of the Galactic disc. Methods. We present a new strategy for the generation of thin-disc stars which takes the IMF, SFR and evolutionary tracks as free parameters. We have updated most of the ingredients for the star count production and, for the first time, binary stars are generated in a consistent way. In this new scheme we keep the local dynamical self-consistency as in Bienayme et al. (1987). We then compare simulations from the new model with Tycho-2 data and the local luminosity function, as a first test to verify and constrain the new ingredients. The effects of changing thirteen different ingredients of the model are systematically studied. Results. For the first time, a full-sky comparison is performed between the BGM and data. This strategy allows the IMF slope at high masses to be constrained; it is found to be close to 3.0, excluding a shallower slope such as Salpeter's. The SFR is found to be decreasing whatever IMF is assumed. The model is compatible with a local dark matter density of 0.011 M⊙ pc⁻³, implying that there is no compelling evidence for a significant amount of dark matter in the disc. While the model is fitted to Tycho-2 data, a magnitude-limited sample with V < 11, we check that it is still consistent with fainter stars. Conclusions. The new model constitutes a new basis for further comparisons with large-scale surveys and is being prepared to become a powerful tool for the analysis of the Gaia mission data.
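As a small illustration of the constraint quoted above (a sketch under assumptions, not the BGM code): masses can be drawn from a single power-law IMF, dN/dm ∝ m^(−α), by inverse-transform sampling; α = 3.0 is the high-mass slope found here, while the mass limits below are placeholder values.

```python
# Illustrative sketch (not the Besançon Galaxy Model code): draw stellar masses
# from a single power-law IMF dN/dm ∝ m^(-alpha) between m_lo and m_hi using
# inverse-transform sampling.
import random

def sample_powerlaw_imf(n, alpha=3.0, m_lo=1.0, m_hi=120.0):
    """Return n masses (in solar masses) drawn from dN/dm ∝ m^(-alpha)."""
    if alpha == 1.0:
        raise ValueError("alpha = 1 requires the logarithmic special case")
    k = 1.0 - alpha
    a, b = m_lo ** k, m_hi ** k
    return [(a + random.random() * (b - a)) ** (1.0 / k) for _ in range(n)]

masses = sample_powerlaw_imf(5)
print(masses)   # mostly masses near m_lo, as expected for a steep IMF
```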
Abstract:
This study explored ethnic identity among 410 mestizo students attending one of three universities, which varied in their ethnic composition and their educational model. One of these universities was private and, like one of the public universities, had mostly mestizo students. The third educational context, also public, had an intercultural model of education, and its students were a mix of mestizo and indigenous. The Multigroup Ethnic Identity Measure (MEIM) was administered to the students in order to compare their scores on ethnic identity and its components: affirmation, belonging or commitment, and exploration. Principal components factor analysis with varimax rotation and tests of mean group differences were performed. The results showed significant differences between the studied groups. Scores on ethnic identity and its components were significantly higher among the mestizo group from the university with an intercultural model of education than among mestizos from the public and private universities of the same region. Implications of these findings for education are considered, as are the strengths and limitations of this research.
Abstract:
Bandura (1986) developed the concept of moral disengagement to explain how individuals can engage in detrimental behavior while experiencing low levels of negative feelings such as guilt. Most research on moral disengagement has investigated it as a global concept (e.g., Bandura, Barbaranelli, Caprara, & Pastorelli, 1996; Moore, Detert, Klebe Treviño, Baker, & Mayer, 2012), whereas Bandura (1986, 1990) initially developed eight distinct mechanisms of moral disengagement grouped into four categories representing the various means through which moral disengagement can operate. In our work, we propose to develop measures of this concept based on its categories, namely rightness of actions, rejection of personal responsibility, distortion of negative consequences, and negative perception of the victims, that are not specific to a particular area of research. Through these measures, we aim to better understand the cognitive process leading individuals to behave unethically by investigating which category plays a role in explaining unethical behavior depending on the situation individuals are in. To this purpose, we conducted five studies to develop the measures and test their predictive validity. In particular, we assessed the ability of the newly developed measures to predict two types of unethical behavior, i.e. discriminatory behavior and cheating behavior. Confirmatory factor analyses demonstrated a good fit of the model, and the findings generally supported our predictions.
Abstract:
In the implementation of CLIL in higher education, apart from studies on student proficiency and teacher availability, and the development of interdisciplinary educational material, the current challenge is to get content teachers from a wide range of disciplines actively involved in CLIL. In this paper we present the basis of a model for a CLIL system, using Newtonian dynamics. It may be an interesting and plausible model in a scientific and technological university context, where until now CLIL has been implemented only lightly.
Abstract:
Diabetic retinopathy is the leading cause of visual loss in individuals under the age of 55. Most investigations into the pathogenesis of diabetic retinopathy have concentrated on the neural retina, since this is where clinical lesions are manifested. Recently, however, various abnormalities in the structural and secretory functions of the retinal pigment epithelium, which are essential for neuroretinal survival, have been found in diabetic retinopathy. In this context, here we study the effect of hyperglycemic and hypoxic conditions on the metabolism of a human retinal pigment epithelial cell line (ARPE-19) by integrating quantitative proteomics using tandem mass tagging (TMT), untargeted metabolomics using MS and NMR, and 13C-glucose isotopic labeling for metabolic tracking. We observed a remarkable metabolic diversification under our simulated in vitro hyperglycemic conditions of diabetes, characterized by increased flux through the polyol pathway and inhibition of the Krebs cycle and oxidative phosphorylation. Importantly, under low oxygen supply RPE cells seem to rapidly consume glycogen stores and stimulate anaerobic glycolysis. Our results therefore pave the way to future scenarios involving new therapeutic strategies aimed at modulating RPE metabolic impairment, with the goal of regulating the structural and secretory alterations of the RPE. Finally, this study shows the importance of tackling biomedical problems by integrating metabolomic and proteomic results.
Abstract:
Computed tomography (CT) is an imaging technique in which interest has grown rapidly since its introduction in the early 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies requiring several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation because of their faster metabolism, and harmful consequences are more likely to occur because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, designed to substantially reduce dose, is certainly a major achievement in the evolution of CT, but it has also created difficulties in assessing the quality of the images produced with those algorithms. The goal of the present work was to propose a strategy for investigating the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty is to have at one's disposal a clinically relevant way of estimating image quality. To ensure the choice of pertinent image quality criteria, this work was carried out in close collaboration with radiologists throughout. The work began by addressing how to characterise image quality in musculo-skeletal examinations.
We focused in particular on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern when reducing patient dose in abdominal investigations. Because alternatives to classical Fourier-space metrics were needed to assess image quality, we turned to mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when results had to be compared with those of human observers, taking advantage of their incorporation of elements of the human visual system. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: standard metrics remain important for assessing unit compliance with legal requirements, but model observers are needed when optimising imaging protocols.
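As an illustration of the task-based approach described above, the sketch below implements a non-prewhitening (NPW) model observer, one simple ideal-type observer; it is an assumed example, not the thesis code, and the anthropomorphic observers actually used would differ. Given signal-present and signal-absent image patches and the known signal template, it returns a detectability index d′.

```python
# Illustrative sketch (not the thesis code): a non-prewhitening (NPW) model
# observer for task-based image quality assessment. The synthetic data below
# are placeholders.
import numpy as np

def npw_dprime(signal_present, signal_absent, template):
    """d' of the NPW observer from template-matched scores of the two classes."""
    w = template.reshape(-1)                       # NPW uses the raw signal as filter
    t_sp = signal_present.reshape(len(signal_present), -1) @ w
    t_sa = signal_absent.reshape(len(signal_absent), -1) @ w
    pooled_var = 0.5 * (t_sp.var(ddof=1) + t_sa.var(ddof=1))
    return (t_sp.mean() - t_sa.mean()) / np.sqrt(pooled_var)

# Toy example: a Gaussian blob signal in white noise (placeholder data).
rng = np.random.default_rng(0)
x = np.arange(32) - 15.5
blob = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 3.0 ** 2))
noise = rng.normal(0, 1, (200, 32, 32))
sp = noise[:100] + 0.3 * blob                      # signal-present patches
sa = noise[100:]                                   # signal-absent patches
print("NPW d' =", npw_dprime(sp, sa, 0.3 * blob))
```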
Abstract:
BACKGROUND: Core body temperature is used to stage and guide the management of hypothermic patients; however, obtaining accurate measurements of core temperature is challenging, especially in the pre-hospital context. The Swiss staging model for hypothermia uses clinical indicators to stage hypothermia. The proposed temperature range for clinical stage 1 is <35-32 °C (<95-90 °F); for stage 2, <32-28 °C (<90-82 °F); for stage 3, <28-24 °C (<82-75 °F); and for stage 4, below 24 °C (75 °F). However, the evidence relating these temperature ranges to the clinical stages needs to be strengthened. METHODS: Medline was used to retrieve data on as many cases of accidental hypothermia (core body temperature <35 °C (95 °F)) as possible. Cases of therapeutic or neonatal hypothermia and those with confounders or insufficient data were excluded. To evaluate the Swiss staging model for hypothermia, we estimated the percentage of patients who were correctly classified and compared the theoretical with the observed ranges of temperatures for each clinical stage. The number of rescue collapses was also recorded. RESULTS: We analysed 183 cases; the median temperature for the sample was 25.2 °C (IQR 22-28). 95 of the 183 patients (51.9 %; 95 % CI = 44.7 %-59.2 %) were correctly classified, while the temperature was overestimated in 36 patients (19.7 %; 95 % CI = 13.9 %-25.4 %). We observed important overlaps among the four stage groups with respect to core temperature, the lowest observed temperature being 28.1 °C for stage 1, 22 °C for stage 2, 19.3 °C for stage 3, and 13.7 °C for stage 4. CONCLUSION: Predicting core body temperature using clinical indicators is a difficult task. Despite the inherent limitations of our study, it strengthens the evidence linking the clinical hypothermia stages to core temperature. Lowering the temperature thresholds that distinguish the different stages would reduce the number of cases in which body temperature is overestimated, avoiding some potentially negative consequences for the management of hypothermic patients.
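The theoretical temperature bands quoted above can be expressed as a small lookup, shown below as a sketch (not part of the study; the handling of exact boundary values is an assumption). Comparing a clinically assigned stage with the band containing the measured core temperature is the kind of check the authors report.

```python
# Sketch (assumed, not from the study): map a measured core temperature to the
# theoretical band of the Swiss staging model quoted in the abstract.
# Boundary values are assigned to the warmer stage, which is an assumption.
def theoretical_swiss_stage(core_temp_c: float) -> int:
    """Return the Swiss stage whose temperature band contains core_temp_c (°C)."""
    if 32.0 <= core_temp_c < 35.0:
        return 1          # <35-32 °C
    if 28.0 <= core_temp_c < 32.0:
        return 2          # <32-28 °C
    if 24.0 <= core_temp_c < 28.0:
        return 3          # <28-24 °C
    if core_temp_c < 24.0:
        return 4          # <24 °C
    raise ValueError("not hypothermic: core temperature is 35 °C or above")

# Example: the coldest patient clinically assessed as stage 1 in the series
# (28.1 °C) falls in the stage-2 band, i.e. the clinical stage overestimated
# the core temperature.
print(theoretical_swiss_stage(28.1))   # -> 2
```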
Abstract:
Political actors use ICTs in different ways and to different degrees when it comes to achieving a closer relationship between the public and politicians. Usually, political parties develop ICT strategies only for electoral campaigning and therefore restrict ICT usage to providing information and establishing a few channels of communication. By contrast, local governments make much more use of ICT tools for participatory and deliberative purposes. These differences in usage have not been well explained in the literature because of the lack of a comprehensive explanatory model. This chapter seeks to build the basis for such a model, that is, to establish which factors affect and condition the different political uses of ICTs and which principles underlie that behaviour. We consider that political actors are intentional and that their behaviour is mediated by the political institutions and the socioeconomic context of the country. At the same time, the actor's own characteristics, such as the type and size of the organization or the model of e-democracy that the actor upholds, can also influence the launch of ICT initiatives for approaching the public.