883 results for network-on-chip, deadlock, message-dependent-deadlock, NoC
Abstract:
Given that the tourist image has been recognized as one of the most influential elements in the competitiveness of tourist destinations, the main objective of this article is to build a conceptual framework showing the influence of the destination's relational network on its projected image. In this context, the tourist image is assumed to be a social construction resulting from the interaction of the various agents involved in the tourist destination (public administrations, local institutions, tourism companies, etc.), and a theoretical model is proposed to show the effects of the destination's relational network on the quality of the tourist image created, in terms of the knowledge generated and, therefore, on the destination's competitiveness.
Abstract:
MOTIVATION: The analysis of molecular coevolution provides information on the potential functional and structural implications of positions along DNA sequences, and several methods are available to identify coevolving positions using probabilistic or combinatorial approaches. The specific nucleotide or amino acid profile associated with the coevolution process is, however, not estimated; only known profiles, such as the Watson-Crick constraint, are usually considered a priori in current measures of coevolution. RESULTS: Here, we propose a new probabilistic model, Coev, to identify coevolving positions and their associated profile in DNA sequences while incorporating the underlying phylogenetic relationships. The process of coevolution is modeled by a 16 × 16 instantaneous rate matrix that includes rates of transition as well as a profile of coevolution. We used simulated, empirical and illustrative data to evaluate our model and to compare it with a model of 'independent' evolution using the Akaike Information Criterion. We show that the Coev model is able to discriminate between coevolving and non-coevolving positions and provides better sensitivity and specificity than other available approaches. We further demonstrate that the identification of the profile of coevolution can shed new light on the process of dependent substitution during lineage evolution.
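As a rough illustration of such a matrix, the sketch below builds a 16 × 16 rate matrix over dinucleotide states in which single-position changes into a designated coevolution profile occur at one rate and all other single-position changes at another. The two-rate parameterization and the Watson-Crick-like profile are assumptions made for illustration, not the paper's exact model.

```python
import itertools
import numpy as np

NUCS = "ACGT"
PAIRS = ["".join(p) for p in itertools.product(NUCS, repeat=2)]  # 16 dinucleotide states

def coev_rate_matrix(r, s, profile):
    """Build a 16x16 instantaneous rate matrix for a pair of positions.

    Only one position may change per event; changes *into* a profile pair
    occur at rate r, all other single changes at rate s (hypothetical
    two-rate parameterization).
    """
    Q = np.zeros((16, 16))
    for i, a in enumerate(PAIRS):
        for j, b in enumerate(PAIRS):
            if i == j:
                continue
            # disallow simultaneous substitution at both positions
            if sum(x != y for x, y in zip(a, b)) != 1:
                continue
            Q[i, j] = r if b in profile else s
        Q[i, i] = -Q[i].sum()  # each row of a rate matrix sums to zero
    return Q

# Example: a Watson-Crick-like profile of coevolving pairs
Q = coev_rate_matrix(r=2.0, s=0.5, profile={"AT", "TA", "CG", "GC"})
```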
Abstract:
Controversial results have been reported concerning the neural mechanisms involved in the processing of rewards and punishments. On the one hand, there is evidence suggesting that monetary gains and losses activate a similar fronto-subcortical network. On the other hand, results of recent studies imply that reward and punishment may engage distinct neural mechanisms. Using functional magnetic resonance imaging (fMRI), we investigated both regional and interregional functional connectivity patterns while participants performed a gambling task featuring unexpectedly high monetary gains and losses. Classical univariate statistical analysis showed that monetary gains and losses activated a similar fronto-striatal-limbic network, in which the main activation peaks were observed bilaterally in the ventral striatum. Functional connectivity analysis showed similar responses for gain and loss conditions in the insular cortex, the amygdala, and the hippocampus that correlated with the activity observed in the seed region, the ventral striatum, with the connectivity to the amygdala appearing more pronounced after losses. Larger functional connectivity to the medial orbitofrontal cortex was found for negative outcomes. The fact that different functional patterns were obtained with the two analyses suggests that the activations observed with the classical univariate approach reflect the involvement of different functional networks in the current task. These results stress the importance of studying functional connectivity in addition to standard fMRI analysis in reward-related studies.
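The core of a seed-based connectivity analysis like the one described is a correlation between the seed region's time series and each target region's time series. Below is a minimal numpy sketch of that step only (data shapes and region choices are assumptions; it is not the authors' pipeline).

```python
import numpy as np

def seed_connectivity(seed_ts, roi_ts):
    """Pearson correlation between a seed time series and each ROI time series.

    seed_ts: (T,) BOLD signal averaged over the seed region (e.g., ventral striatum)
    roi_ts:  (T, R) BOLD signals for R target regions (e.g., insula, amygdala)
    Returns an (R,) array of correlation coefficients.
    """
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    rois = (roi_ts - roi_ts.mean(axis=0)) / roi_ts.std(axis=0)
    # mean of the product of z-scores equals the Pearson correlation
    return (rois * seed[:, None]).mean(axis=0)
```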
Abstract:
Learning objects have held the promise of providing people with high-quality learning resources. Initiatives such as MIT OpenCourseWare, MERLOT and others have shown the real possibilities of creating and sharing knowledge through the Internet. Thousands of educational resources are available through learning object repositories. We indeed live in an age of content abundance, and content can be considered infrastructure for building adaptive and personalized learning paths, promoting both formal and informal learning. Nevertheless, although most educational institutions are adopting a more open approach, publishing huge amounts of educational resources, the reality is that these resources are barely used in other educational contexts. This paradox can be partly explained by the difficulties in adapting such resources with respect to language, e-learning standards and specifications and, finally, granularity. Furthermore, if we want our learners to use and take advantage of learning object repositories, we need to provide them with services beyond just browsing and searching for resources. Social networks can be a first step towards creating an open social community of learning around a topic or a subject. In this paper we discuss and analyze the process of using a learning object repository and building a social network on top of it, with respect to the information architecture needed to capture and store the interactions between learners and resources in the form of learning object metadata.
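As a rough illustration of capturing learner-resource interactions as metadata, the sketch below defines one interaction record; the field names are illustrative, not a binding of any LOM standard.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class InteractionRecord:
    """One learner-resource interaction, ready to be attached to a learning
    object's metadata (hypothetical schema, for illustration only)."""
    learner_id: str
    object_id: str
    action: str          # e.g. "viewed", "rated", "commented", "reused"
    rating: int | None   # optional 1-5 rating, None for non-rating actions
    timestamp: str

record = InteractionRecord(
    learner_id="u42",
    object_id="lo:calculus-101",
    action="rated",
    rating=4,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
metadata_annotation = asdict(record)  # store alongside the object's metadata
```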
Abstract:
In this paper we discuss and analyze the process of using a learning object repository and building a social network on top of it, including aspects related to open source technologies, promoting the use of the repository by means of social networks, and helping learners to develop their own learning paths.
Abstract:
Self-categorization theory is a social psychology theory dealing with the relation between the individual and the group. It explains group behaviour through the conception of oneself and others as members of social categories, and through the attribution of the categories' prototypical characteristics to individuals. Hence, it is a theory of the individual that intends to explain collective phenomena. Situations involving a large number of non-trivially interacting individuals typically generate complex collective behaviours, which are difficult to anticipate on the basis of individual behaviour. Computer simulation of such systems is a reliable way of systematically exploring the dynamics of the collective behaviour as a function of individual specifications.
In this thesis, we present a formal model of the part of self-categorization theory known as the metacontrast principle. Given the distribution of a set of individuals on one or several comparison dimensions, the model generates categories and their associated prototypes. We show that the model behaves coherently with respect to the theory and is able to replicate experimental data concerning various group phenomena, for example polarization. Moreover, it allows us to systematically describe the predictions of the theory from which it is derived, especially in situations not yet examined experimentally. At the collective level, several dynamics can be observed, among which are convergence towards consensus, towards fragmentation, or towards the emergence of extreme attitudes. We also study the effect of the social network on the dynamics and show that, except for the convergence speed, which increases as the mean distances on the network decrease, the observed convergence types do not depend much on the chosen network. We further note that individuals located at the border of the groups (whether in the social network or spatially) have a decisive influence on the outcome of the dynamics. In addition, the model can be used as an automatic classification algorithm. It identifies prototypes around which groups are built. Prototypes are positioned so as to accentuate the groups' typical characteristics and are not necessarily central. Finally, if we consider the set of pixels of an image as individuals in a three-dimensional color space, the model provides a filter that can attenuate noise, help detect objects, and simulate perception biases such as chromatic induction.
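As a rough illustration of the metacontrast principle on a single comparison dimension, the sketch below scores an item's prototypicality as the ratio of its mean distance to out-group members over its mean distance to in-group members. This is a sketch of the principle only, not the thesis's full model.

```python
import numpy as np

def metacontrast(x, members, others):
    """Metacontrast ratio of item x for a candidate category: mean distance to
    non-members divided by mean distance to fellow members.
    Higher ratio = more prototypical of the category.
    """
    inter = np.mean([abs(x - o) for o in others])
    intra = np.mean([abs(x - m) for m in members if m != x])
    return inter / intra if intra > 0 else np.inf

# One comparison dimension: two clusters of attitude positions
group = [1.0, 1.2, 1.5]
rest = [4.0, 4.3, 4.8]
scores = {m: round(metacontrast(m, group, rest), 2) for m in group}
# The member farthest from the out-group scores highest, illustrating why
# prototypes accentuate typical characteristics rather than being central.
```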
Abstract:
This Master's thesis presents methods for detecting a broken rotor bar. The purpose of the work is to study rotor faults by means of the stator current. The work is divided roughly into three areas: faults of the squirrel-cage induction motor, identification of rotor faults, and the signal processing methods used to detect a broken bar. Faults of an induction motor include stator winding damage and rotor damage. Rotor winding faults include the cracking of rotor bars and the detachment of a rotor bar from the end ring. Methods for identifying rotor faults include parameter estimation and current spectrum analysis. The first part of the work presents the structure and operation of induction motors. Faults affecting the motor are presented, and solution methods for identifying rotor faults are explored. Finally, the work examines how the results obtained from stator measurement data can be processed with an FFT algorithm, and how the FFT algorithm can be implemented as an embedded application on a SHARC processor. The work uses the ADSP-21062 EZ-LAB development environment, which makes it possible to run programs from a RAM chip that interfaces with the devices on the SHARC board.
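As a rough illustration of the detection idea (not the thesis's embedded implementation), broken rotor bars produce sidebands at (1 ± 2s)·f around the supply frequency f in the stator current spectrum, where s is the slip. The Python sketch below builds a synthetic current and measures the sideband level via an FFT; the sampling rate, slip, and amplitudes are assumed values.

```python
import numpy as np

# Motor current signature analysis, minimal sketch: broken rotor bars show
# up as sidebands at (1 +/- 2s)*f around the supply frequency f (slip s).
fs = 5000.0                 # sampling rate [Hz] (assumed)
f_supply, slip = 50.0, 0.03
t = np.arange(0, 10, 1 / fs)

# Synthetic stator current: fundamental plus small broken-bar sidebands
i_stator = (np.sin(2 * np.pi * f_supply * t)
            + 0.02 * np.sin(2 * np.pi * (1 - 2 * slip) * f_supply * t)
            + 0.02 * np.sin(2 * np.pi * (1 + 2 * slip) * f_supply * t))

# Windowed FFT of the measured current
spectrum = np.abs(np.fft.rfft(i_stator * np.hanning(len(i_stator))))
freqs = np.fft.rfftfreq(len(i_stator), 1 / fs)

# A simple fault indicator: sideband amplitude relative to the fundamental (dB)
fund = spectrum[np.argmin(np.abs(freqs - f_supply))]
side = spectrum[np.argmin(np.abs(freqs - (1 - 2 * slip) * f_supply))]
print(20 * np.log10(side / fund))
```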
Abstract:
In this study, the theoretical part compares different Value at Risk models. Based on that comparison, one model was chosen for the empirical part, which concentrated on finding out whether the model measures market risk accurately. The purpose of this study was to test whether Volatility-weighted Historical Simulation is accurate in measuring market risk, and what improvements it brings to market risk measurement compared with traditional Historical Simulation. The volatility-weighting method of Hull and White (1998) was chosen in order to improve the traditional method's capability to measure market risk. We found that results based on Historical Simulation depend on the chosen time period, the confidence level, and how samples are weighted. The findings of this study are that the chosen method cannot be called fully reliable in measuring market risk, because the backtesting results vary over the time period of the study.
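A minimal sketch of the Hull and White volatility-weighting idea: rescale each historical return to today's volatility level before taking the empirical loss quantile. The EWMA volatility estimator and the parameter values below are illustrative choices, not necessarily those used in the study.

```python
import numpy as np

def vwhs_var(returns, alpha=0.99, lam=0.94):
    """Volatility-weighted Historical Simulation VaR, after Hull & White (1998).

    Each historical return is rescaled by (current volatility / volatility at
    the time of the return); volatility is estimated here with an EWMA
    (RiskMetrics-style lambda; parameter choices are illustrative).
    """
    returns = np.asarray(returns, dtype=float)
    var_t = np.empty_like(returns)
    var_t[0] = returns[:20].var()             # seed the EWMA with a sample variance
    for t in range(1, len(returns)):
        var_t[t] = lam * var_t[t - 1] + (1 - lam) * returns[t - 1] ** 2
    sigma = np.sqrt(var_t)
    scaled = returns * sigma[-1] / sigma      # re-express all returns at today's vol
    return -np.quantile(scaled, 1 - alpha)    # VaR reported as a positive loss

# Usage: daily_returns = np.diff(np.log(prices)); var99 = vwhs_var(daily_returns)
```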
Abstract:
Mental health problems at work constitute a challenge from the clinical, professional, economic, and public health perspectives alike. The total costs they generate in Switzerland are equivalent to 3.2% of the Swiss gross domestic product (GDP), and they very often lead to dismissal. The vast majority of affected people are treated by their primary care physician. The Institute for Work and Health offers a consultation specialized in questions of suffering at work, providing primary care physicians with multidisciplinary advice or support from a collaborative care perspective. Its action, adapted to the needs of each situation, ranges from giving advice to referring patients to specialists who can strengthen the care network on a long-term basis (psychiatric follow-up, supported employment programs, legal or social advice).
Abstract:
PURPOSE: To assess baseline predictors and consequences of medication non-adherence in the treatment of pediatric patients with attention-deficit/hyperactivity disorder (ADHD) from Central Europe and East Asia. PATIENTS AND METHODS: Data for this post-hoc analysis were taken from a 1-year prospective, observational study that included a total of 1,068 newly diagnosed pediatric patients with ADHD symptoms from Central Europe and East Asia. Medication adherence during the week prior to each visit was assessed by treating physicians using a 5-point Likert scale, and then dichotomized into either adherent or non-adherent. Clinical severity was measured with the Clinical Global Impressions-ADHD-Severity (CGI-ADHD) scale and the Child Symptom Inventory-4 (CSI-4) Checklist. Health-related quality of life (HRQoL) was measured using the Child Health and Illness Profile-Child Edition (CHIP-CE). Regression analyses were used to assess baseline predictors of overall adherence during follow-up, and the impact of time-varying adherence on subsequent outcomes: response (defined as a decrease of at least 1 point in CGI), and changes in CGI-ADHD, CSI-4, and the five dimensions of CHIP-CE. RESULTS: Of the 860 patients analyzed, 64.5% (71.6% in Central Europe and 55.5% in East Asia) were rated as adherent and 35.5% as non-adherent during follow-up. Being from East Asia was found to be a strong predictor of non-adherence. In East Asia, a family history of ADHD and parental emotional distress were associated with non-adherence, while having no other children living at home was associated with non-adherence in Central Europe as well as in the overall sample. Non-adherence was associated with poorer response and less improvement on the CGI-ADHD and CSI-4, but not on the CHIP-CE. CONCLUSION: Non-adherence to medication is common in the treatment of ADHD, particularly in East Asia. Non-adherence was associated with poorer response and less improvement in clinical severity. A limitation of this study is that medication adherence was assessed by the treating clinician using a single-item question.
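A minimal sketch of the kind of analysis described, dichotomizing the physician's 5-point adherence rating and regressing adherence on baseline predictors; the file name, column names, and cut-point are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-patient data mirroring the analysis described above
df = pd.read_csv("adhd_visits.csv")                           # assumed file/columns
df["adherent"] = (df["adherence_likert"] >= 4).astype(int)    # assumed cut-point

# Logistic regression of adherence on baseline predictors (illustrative names)
model = smf.logit(
    "adherent ~ C(region) + family_history_adhd + parental_distress"
    " + other_children_at_home",
    data=df,
).fit()
print(model.summary())
```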
Abstract:
Due to advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has long been to raise the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from consumer electronics (e.g., video processing) to complex critical control systems. On the other hand, the resources have to be utilized efficiently in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but is becoming an issue at ground level as well, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in a hardware description language, namely VHDL.
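As a toy illustration of the dynamic-reconfiguration idea (the thesis develops it via formal refinement and derives implementations in VHDL; this Python sketch only mimics the behaviour), an agent remaps tasks from a core reported faulty to the least-loaded healthy core.

```python
from dataclasses import dataclass, field

@dataclass
class Core:
    cid: int
    healthy: bool = True
    tasks: list = field(default_factory=list)

class ReconfigurationAgent:
    """Sketch of a reconfiguration agent: when a core is reported faulty,
    its tasks migrate to the least-loaded healthy core (illustrative only)."""

    def __init__(self, cores):
        self.cores = {c.cid: c for c in cores}

    def report_fault(self, cid):
        faulty = self.cores[cid]
        faulty.healthy = False
        for task in faulty.tasks:
            # pick the healthy core currently running the fewest tasks
            target = min((c for c in self.cores.values() if c.healthy),
                         key=lambda c: len(c.tasks))
            target.tasks.append(task)
        faulty.tasks = []

cores = [Core(0, tasks=["fft"]), Core(1, tasks=["decode"]), Core(2)]
agent = ReconfigurationAgent(cores)
agent.report_fault(0)   # "fft" migrates to core 2, the least-loaded healthy core
```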
Abstract:
Internet of Things (IoT) technologies are developing rapidly, and as a result several standards of interconnection protocols and platforms coexist. This heterogeneity of protocols and platforms has become a critical challenge for IoT system developers. To mitigate this challenge, a few alliances and organizations have taken the initiative to build frameworks that help integrate application silos. Some of these frameworks focus only on a specific domain, such as home automation. However, the resource constraints of a large proportion of connected devices make it difficult to build an interoperable system using such frameworks. Therefore, a general-purpose, lightweight interoperability framework that can be used across a range of devices is required. To tackle this heterogeneity, this work introduces an embedded, distributed and lightweight service bus, the Lightweight IoT Service bus Architecture (LISA), which fits inside the network stack of a small real-time operating system for constrained nodes. LISA provides a uniform application programming interface for an IoT system on a range of devices with varying resource constraints. It hides platform and protocol variations underneath it, thus facilitating interoperability in IoT implementations. LISA is inspired by the Network on Terminal Architecture, a service-centric open architecture by Nokia Research Center. Unlike many other interoperability frameworks, LISA is designed specifically for resource-constrained nodes, and it provides the essential features of a service bus for easy service-oriented architecture implementation. The presented architecture utilizes an intermediate computing layer, a Fog layer, between the small nodes and the cloud, thereby facilitating the federation of constrained nodes into subnetworks. As a result of the modular and distributed design, the part of LISA running in the Fog layer handles the heavy lifting to assist the lightweight portion of LISA inside the resource-constrained nodes. Furthermore, LISA introduces a new networking paradigm, Node Centric Networking, to route messages across protocol boundaries and thereby facilitate interoperability. This thesis presents a concept implementation of the architecture and lays a foundation for future extension towards a comprehensive interoperability framework for IoT.
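As a rough illustration of the service-bus idea (the API below is hypothetical, not LISA's actual interface), services register under a uniform name and callers invoke them without knowing which node or protocol serves them.

```python
# Toy sketch of a service bus hiding transport details behind a uniform API.
class ServiceBus:
    def __init__(self):
        self._services = {}   # service name -> (transport, handler)

    def register(self, name, handler, transport="local"):
        """Expose a handler under a service name, tagged with its transport."""
        self._services[name] = (transport, handler)

    def invoke(self, name, payload):
        """Call a service by name; the caller never sees the transport."""
        transport, handler = self._services[name]
        # A real bus would route over CoAP/MQTT/BLE etc. based on `transport`;
        # in this sketch every call is dispatched locally.
        return handler(payload)

bus = ServiceBus()
bus.register("temperature/read", lambda _: {"celsius": 21.5}, transport="coap")
print(bus.invoke("temperature/read", {}))   # {'celsius': 21.5}
```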