968 results for Execution semantics
Abstract:
Modern cloud-based applications and infrastructures may include resources and services (components) from multiple cloud providers, are heterogeneous by nature, and require adjustment, composition, and integration. Current static, predefined cloud integration architectures and models struggle to meet specific application requirements. In this paper, we propose the Intercloud Operations and Management Framework (ICOMF) as part of the more general Intercloud Architecture Framework (ICAF), which provides a basis for building and operating a dynamically manageable multi-provider cloud ecosystem. The proposed ICOMF enables dynamic resource composition and decomposition, with a main focus on translating business models and objectives into cloud service ensembles. Our model is user-centric and focuses on specific application execution requirements by leveraging incubating virtualization techniques. From a cloud provider perspective, the ecosystem provides more insight into how best to customize the offerings of virtualized resources.
Abstract:
Various applications for the purposes of event detection, localization, and monitoring can benefit from the use of wireless sensor networks (WSNs). Wireless sensor networks are generally easy to deploy, have flexible topology, and can support a diversity of tasks thanks to the large variety of sensors that can be attached to the wireless sensor nodes. To guarantee the efficient operation of such a heterogeneous wireless sensor network during its lifetime, appropriate management is necessary. Typically, there are three management tasks, namely monitoring, (re)configuration, and code updating. On the one hand, status information, such as battery state and node connectivity, of both the wireless sensor network and the sensor nodes has to be monitored. On the other hand, sensor nodes have to be (re)configured, e.g., by setting the sensing interval. Most importantly, new applications have to be deployed and bug fixes have to be applied during the network lifetime. All management tasks have to be performed in a reliable, time- and energy-efficient manner. The ability to disseminate data from one sender to multiple receivers in a reliable, time- and energy-efficient manner is critical for the execution of the management tasks, especially for code updating. Using multicast communication in wireless sensor networks is an efficient way to handle such traffic patterns. Due to the nature of code updates, a multicast protocol has to support bulky traffic and end-to-end reliability. Further, the limited resources of wireless sensor nodes demand an energy-efficient operation of the multicast protocol. Current data dissemination schemes do not fulfil all of the above requirements. In order to close this gap, we designed the Sensor Node Overlay Multicast (SNOMC) protocol to support reliable, time-efficient, and energy-efficient dissemination of data from one sender node to multiple receiver nodes.
In contrast to other multicast transport protocols, which do not support reliability mechanisms, SNOMC supports end-to-end reliability using a NACK-based reliability mechanism. The mechanism is simple, easy to implement, and can significantly reduce the number of transmissions. It is complemented by a data acknowledgement after successful reception of all data fragments by the receiver nodes. SNOMC integrates three different caching strategies for efficient handling of the necessary retransmissions, namely caching on each intermediate node, caching on branching nodes, or caching only on the sender node. Moreover, an option was included to pro-actively request missing fragments. SNOMC was evaluated both in the OMNeT++ simulator and in our in-house real-world testbed, and compared to a number of common data dissemination protocols, such as Flooding, MPR, TinyCubus, and PSFQ, as well as both UDP and TCP. The results showed that SNOMC outperforms the selected protocols in terms of transmission time, number of transmitted packets, and energy consumption. Moreover, we showed that SNOMC performs well with different underlying MAC protocols, which support different levels of reliability and energy efficiency. Thus, SNOMC can offer a robust, high-performing solution for the efficient distribution of code updates and management information in a wireless sensor network. To address the three management tasks, in this thesis we developed the Management Architecture for Wireless Sensor Networks (MARWIS). MARWIS is specifically designed for the management of heterogeneous wireless sensor networks. A distinguishing feature of its design is the use of wireless mesh nodes as a backbone, which enables diverse communication platforms and the offloading of functionality from the sensor nodes to the mesh nodes. This hierarchical architecture allows for efficient operation of the management tasks, since the sensor nodes are organised into small sub-networks, each managed by a mesh node.
Furthermore, we developed an intuitive graphical user interface, which allows non-expert users to easily perform management tasks in the network. In contrast to other management frameworks, such as Mate, MANNA, and TinyCubus, or code dissemination protocols, such as Impala, Trickle, and Deluge, MARWIS offers an integrated solution for monitoring, configuration, and code updating of sensor nodes. Integration of SNOMC into MARWIS further increases the efficiency of the management tasks. To our knowledge, our approach is the first to combine a management architecture with an efficient overlay multicast transport protocol. This combination of SNOMC and MARWIS supports reliable, time- and energy-efficient operation of a heterogeneous wireless sensor network.
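The NACK-based retransmission scheme with sender-side caching described in this abstract can be illustrated with a small simulation. This is only a sketch under simplifying assumptions (caching only on the sender, lossless retransmissions); all names and the loss model are illustrative, not part of SNOMC's actual implementation.

```python
# Sketch of NACK-based reliable multicast with sender-only caching.
# 'receivers_lost' maps each receiver to the fragment numbers it misses
# in the initial round; retransmissions are assumed to succeed.

def disseminate(fragments, receivers_lost):
    """Multicast all fragments, then serve NACKs from the sender cache
    until every receiver holds the complete set; returns the total
    number of fragment transmissions."""
    cache = dict(enumerate(fragments))           # sender caches all fragments
    delivered = {r: set() for r in receivers_lost}
    transmissions = 0

    # Initial multicast round: each receiver misses its 'lost' fragments.
    for seq in cache:
        transmissions += 1
        for r, lost in receivers_lost.items():
            if seq not in lost:
                delivered[r].add(seq)

    # NACK rounds: receivers report missing sequence numbers, the sender
    # retransmits each NACKed fragment once from its cache.
    while any(len(got) < len(cache) for got in delivered.values()):
        nacked = set()
        for got in delivered.values():
            nacked |= set(cache) - got
        for seq in nacked:
            transmissions += 1
            for r in delivered:
                delivered[r].add(seq)            # retransmission assumed lossless
    return transmissions
```

For example, three fragments sent to two receivers that each lose one distinct fragment cost three initial transmissions plus two retransmissions, i.e. five in total, instead of a second full round.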
Abstract:
When switching tasks, if stimuli are presented that contain features that cue two of the tasks in the set (i.e., bivalent stimuli), performance slowing is observed on all tasks. This generalized slowing extends to tasks in the set which have no features in common with the bivalent stimulus and is referred to as the bivalency effect. In previous work, the bivalency effect was invoked by presenting occasionally occurring bivalent stimuli; therefore, the possibility that the generalized slowing is simply due to surprise (as opposed to bivalency) has not yet been discounted. This question was addressed in two task switching experiments where the occasionally occurring stimuli were either bivalent (bivalent version) or merely surprising (surprising version). The results confirmed that the generalized slowing was much greater in the bivalent version of both experiments, demonstrating that the magnitude of this effect is greater than can be accounted for by simple surprise. This set of results confirms that slowing task execution when encountering bivalent stimuli may be fundamental for efficient task switching, as adaptive tuning of response style may serve to prepare the cognitive system for possible future high conflict trials.
Abstract:
Traditionally, ontologies describe knowledge representation in a denotational, formalized, and deductive way. In addition, in this paper, we propose a semiotic, inductive, and approximate approach to ontology creation. We define a conceptual framework, a semantics extraction algorithm, and a first proof of concept applying the algorithm to a small set of Wikipedia documents. Intended as an extension to the prevailing top-down ontologies, we introduce an inductive fuzzy grassroots ontology, which organizes itself organically from existing natural language Web content. Using inductive and approximate reasoning to reflect the natural way in which knowledge is processed, the ontology's bottom-up build process creates emergent semantics learned from the Web. By this means, the ontology acts as a hub for computing with words described in natural language. For Web users, the structural semantics are visualized as inductive fuzzy cognitive maps, allowing an initial form of intelligence amplification. Finally, we present an implementation of our inductive fuzzy grassroots ontology. Thus, this paper contributes an algorithm for the extraction of fuzzy grassroots ontologies from Web data by inductive fuzzy classification.
Abstract:
We developed a novel delay discounting task to investigate outcome impulsivity in pigs. As impulsivity can affect aggression, and might also relate to proactive and reactive coping styles, eight proactive (HR) and eight reactive (LR) pigs identified in a manual restraint test ("Backtest", after Bolhuis et al., 2003) were weaned and mixed in four pens of four unfamiliar pigs, so that each pen had two HR and two LR pigs, and aggression was scored in the 9 h after mixing. In the delay discounting task, each pig chose between two levers, one always delivering a small immediate reward, the other a large delayed reward with daily increasing delays, impulsive individuals being the ones discounting the value of the large reward more quickly. Two novel strategies emerged: some pigs gradually switched their preference towards the small reward ('Switchers') as predicted, but others persistently preferred the large reward until they stopped making choices ('Omitters'). Outcome impulsivity itself was unrelated to these strategies, to urinary serotonin metabolite (5-HIAA) or dopamine metabolite (HVA) levels, aggression at weaning, or coping style. However, HVA was relatively higher in Omitters than Switchers, and positively correlated with behavioural measures of indecisiveness and frustration during choosing. The delay discounting task thus revealed two response strategies that seemed to be related to the activity of the dopamine system and might indicate a difference in execution, rather than outcome, impulsivity.
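The abstract does not state which discounting model was used to score impulsivity; a commonly used formulation in such tasks is Mazur's hyperbolic model, V = A / (1 + kD). The sketch below, with purely illustrative parameter values, shows how a higher discounting rate k makes an individual switch to the small immediate reward at shorter delays.

```python
# Sketch of choice in a delay discounting task under Mazur's hyperbolic
# model. Reward amounts, delays, and the k values are illustrative and
# not taken from the study.

def discounted_value(amount, delay, k):
    """Subjective value of a reward of size 'amount' after 'delay',
    discounted hyperbolically with rate k."""
    return amount / (1.0 + k * delay)

def chooses_large(small, large, delay, k):
    """True while the delayed large reward is still worth more than the
    immediate small one; impulsive individuals (high k) switch earlier."""
    return discounted_value(large, delay, k) > discounted_value(small, 0.0, k)
```

For instance, with a small reward of 1, a large reward of 4, and k = 0.5, the large reward is still preferred at a delay of 2 but not at a delay of 10, i.e. the preference reverses as the delay grows.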
Abstract:
Robot-assisted therapy has become increasingly common in neurorehabilitation. Sophisticated controllers have been developed for robots to assist and cooperate with the patient. It is difficult for the patient to judge to what extent the robot contributes to the execution of a movement. Therefore, methods to comprehensively quantify the patient's contribution and provide feedback are of key importance. We developed a method to comprehensively estimate the patient's contribution by combining kinematic measures with the applied motor assistance. Inverse dynamic models of the robot and the passive human arm calculate the torques required to move the robot and the arm; together with the recorded motor torque, these form a metric (in percent) that represents the patient's contribution to the movement. To evaluate the developed metric, 12 nondisabled subjects and 7 patients with neurological problems simulated instructed movement contributions. The results are compared with a common performance metric. The estimation shows very satisfying results for both groups, even though the arm model used was strongly simplified. Displaying this metric to patients during therapy can potentially motivate them to actively participate in the training.
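The exact inverse-dynamics models are not given in the abstract; a strongly simplified, single-joint sketch of such a contribution metric, under the assumption that patient torque equals the required torque minus the robot's motor torque, might look like this. The function name and the clipping to [0, 100] % are illustrative assumptions.

```python
# Simplified 1-DOF sketch of a "patient contribution" metric:
# tau_required comes from inverse dynamics of robot + passive arm,
# tau_motor is the recorded robot motor torque; the patient is assumed
# to supply the remainder.

def patient_contribution(tau_required, tau_motor):
    """Percentage of the required torque supplied by the patient,
    clipped to [0, 100] % (e.g. when the robot over-assists)."""
    if tau_required == 0:
        return 0.0
    share = (tau_required - tau_motor) / tau_required * 100.0
    return max(0.0, min(100.0, share))
```

So if 10 N·m are required and the motor provides 4 N·m, the metric attributes 60 % of the movement to the patient; if the motor provides all 10 N·m or more, the contribution is reported as 0 %.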
Abstract:
IT has turned out to be a key factor in gaining maturity in Business Process Management (BPM). This book presents a worldwide investigation that was conducted among companies from the ‘Forbes Global 2000’ list to explore the current usage of software throughout the BPM life cycle and to identify the companies’ requirements concerning process modelling. The responses from 130 companies indicate that, at the present time, it is mainly software for process description and analysis that is required, while process execution is supported by general software such as databases, ERP systems and office tools. The resulting complex system landscapes give rise to distinct requirements for BPM software, while the process modelling requirements can be equally satisfied by the most common languages (BPMN, UML, EPC).
Abstract:
QUESTIONS UNDER STUDY: After years of advocating ABC (Airway-Breathing-Circulation), current guidelines of cardiopulmonary resuscitation (CPR) recommend CAB (Circulation-Airway-Breathing). This trial compared ABC with CAB as the initial approach to CPR from the arrival of rescuers until the completion of the first resuscitation cycle. METHODS: 108 teams, consisting of two physicians each, were randomized to receive a graphical display of either the ABC algorithm or the CAB algorithm. Subsequently, teams had to treat a simulated cardiac arrest. Data analysis was performed using video recordings obtained during the simulations. The primary endpoint was the time to completion of the first resuscitation cycle of 30 compressions and two ventilations. RESULTS: The time to execution of the first resuscitation measure was 32 ± 12 seconds in ABC teams and 25 ± 10 seconds in CAB teams (P = 0.002). 18/53 ABC teams (34%) and none of the 55 CAB teams (P = 0.006) applied more than the recommended two initial rescue breaths, which caused a longer duration of the first cycle of 30 compressions and two ventilations in ABC teams (31 ± 13 vs. 23 ± 6 sec; P = 0.001). Overall, the time to completion of the first resuscitation cycle was longer in ABC teams (63 ± 17 vs. 48 ± 10 sec; P < 0.0001). CONCLUSIONS: This randomized controlled trial found CAB superior to ABC, with an earlier start of CPR and a shorter time to completion of the first 30:2 resuscitation cycle. These findings endorse the change from ABC to CAB in international resuscitation guidelines.
Abstract:
This paper presents a survey on the usage, opportunities and pitfalls of semantic technologies in the Internet of Things. The survey was conducted in the context of a semantic enterprise integration platform. In total we surveyed sixty-one individuals from industry and academia on their views and current usage of IoT technologies in general, and semantic technologies in particular. Our semantic enterprise integration platform aims for interoperability at a service level, as well as at a protocol level. Therefore, questions regarding the use of application layer protocols, network layer protocols and management protocols were also integrated into the survey. The survey suggests that there is still a lot of heterogeneity in IoT technologies, but first indications of the use of standardized protocols exist. Semantic technologies are being recognized as of potential use, mainly in the management of things and services. Nonetheless, the participants still see many obstacles which hinder the widespread use of semantic technologies: firstly, a lack of training, as traditional embedded programmers are not well aware of semantic technologies; secondly, a lack of standardization in ontologies, which would enable interoperability; and thirdly, a lack of good tooling support.
Abstract:
Bacterial meningitis causes neuronal apoptosis in the hippocampal dentate gyrus, which is associated with learning and memory impairments after the disease is cured. The execution of the apoptotic program involves pathways that converge on activation of caspase-3, which is required for the morphological changes associated with apoptosis. Here, the time course and the role of caspase-3 in neuronal apoptosis were assessed in an infant rat model of pneumococcal meningitis. During clinically asymptomatic meningitis (0-12 h after infection), only minor apoptotic damage to the dentate gyrus was observed, while the acute phase (18-24 h) was characterized by a massive increase of apoptotic cells, which peaked at 36 h. In the subacute phase of the disease (36-72 h), the number of apoptotic cells decreased to control levels. Enzymatic caspase-3 activity was significantly increased in hippocampal tissue of infected animals compared to controls at 22 h. The activated enzyme was localized to immature cells of the dentate gyrus, and in vivo activity was evidenced by cleavage of the amyloid-beta precursor protein. Intracisternal administration of the caspase-3-specific inhibitor Ac-DEVD-CHO significantly reduced apoptosis in the hippocampal dentate gyrus. In contrast to a study where the decrease of hippocampal apoptosis after administration of a pan-caspase inhibitor was due to downmodulation of the inflammatory response, our data demonstrate that specific inhibition of caspase-3 did not affect inflammation as assessed by TNF-alpha and IL-1beta concentrations in the cerebrospinal fluid space. Taken together, the present results identify caspase-3 as a key effector of neuronal apoptosis in pneumococcal meningitis.
Abstract:
There is great demand for easily accessible, user-friendly dietary self-management applications. Yet accurate, fully automatic estimation of nutritional intake using computer vision methods remains an open research problem. One key element of this problem is volume estimation, which can be computed from 3D models obtained using multi-view geometry. This paper presents a computational system for volume estimation based on the processing of two meal images. A 3D model of the served meal is reconstructed using the acquired images, and the volume is computed from the shape. The algorithm was tested on food models (dummy foods) with known volume and on real served food. Volume accuracy was on the order of 90%, while the total execution time was below 15 seconds per image pair. The proposed system combines simple and computationally affordable methods for 3D reconstruction, remained stable throughout the experiments, operates in near real time, and places minimum constraints on users.
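The abstract does not detail how the volume is computed from the reconstructed shape; one common approach is to integrate the reconstructed surface heights above the plate plane over a grid. The following is a minimal sketch under that assumption, with illustrative units (heights in cm, cell area in cm²).

```python
# Sketch of volume estimation from a reconstructed height map: each grid
# cell contributes its height above the plate plane times its base area.
# The grid/units are illustrative assumptions, not the paper's pipeline.

def volume_from_heightmap(heights, cell_area_cm2):
    """Approximate food volume in cm^3 from a 2D grid of surface heights
    (cm) above the plate plane; points below the plane are clipped to 0."""
    return sum(max(0.0, h) for row in heights for h in row) * cell_area_cm2
```

For example, a flat 2 × 2 cm region of height 1 cm sampled on a 2 × 2 grid with 0.25 cm² cells yields 1 cm³, as expected for that slab at this sampling resolution.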
Abstract:
Introduction: So far, social psychology in sport has primarily focused on team cohesion, and many studies and meta-analyses have tried to demonstrate a relation between the cohesiveness of a team and its performance. How a team really co-operates and how the individual actions are integrated towards a team action is a question that has received relatively little attention in research. This may, at least in part, be due to the lack of a theoretical framework for collective actions, a dearth that has only recently begun to challenge sport psychologists. Objectives: In this presentation a framework for a comprehensive theory of teams in sport is outlined, and its potential to integrate research in the domain of team performance and, more specifically, the following presentations, is put up for discussion. Method: Based on a model developed by von Cranach, Ochsenbein and Valach (1986), teams are considered to be information-processing organisms, and team actions need to be investigated on two levels: the individual team member and the group as an entity. Elements to be considered are the task, the social structure, the information processing structure, and the execution structure. Obviously, different tasks require different social structures, communication processes, and co-ordination of individual movements. Especially in rapid interactive sports, planning and execution of movements based on feedback loops are not possible. Deliberate planning may be a solution mainly for offensive actions, whereas defensive actions have to adjust to the opponent team's actions. Consequently, mental representations must be developed to allow a feed-forward regulation of team members' actions. Results and Conclusions: Some preliminary findings based on this conceptual framework, as well as further consequences for empirical investigations, will be presented. References: Cranach, M.v., Ochsenbein, G. & Valach, L. (1986). The group as a self-active system: Outline of a theory of group action.
European Journal of Social Psychology, 16, 193-229.
Abstract:
Execution of an enzymatic reaction performed in a capillary with subsequent electrophoretic analysis of the formed products is referred to as electrophoretically mediated microanalysis (EMMA). An EMMA method was developed to investigate the stereoselectivity of the CYP3A4-mediated N-demethylation of ketamine. Ketamine was incubated in a 50 μm id bare fused-silica capillary together with human CYP3A4 Supersomes using a 100 mM phosphate buffer (pH 7.4) at 37°C. A plug containing racemic ketamine and the NADPH regenerating system including all required cofactors for the enzymatic reaction was injected, followed by a plug of the metabolizing enzyme CYP3A4 (500 nM). These two plugs were bracketed by plugs of incubation buffer to ensure proper conditions for the enzymatic reaction. The rest of the capillary was filled with a pH 2.5 running buffer comprising 50 mM Tris, phosphoric acid, and 2% w/v of highly sulfated γ-cyclodextrin. Mixing of reaction plugs was enhanced via application of -10 kV for 10 s. After an incubation of 8 min at 37°C without power application (zero-potential amplification), the capillary was cooled to 25°C within 3 min followed by application of -10 kV for the separation and detection of the formed enantiomers of norketamine. Norketamine formation rates were fitted to the Michaelis-Menten model and the elucidated values for V(max) and K(m) were found to be comparable to those obtained from the off-line assay of a previous study.
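The Michaelis-Menten fit mentioned at the end of this abstract, v = Vmax · S / (Km + S), can be sketched as follows. The simple grid-search fitting routine and the synthetic concentrations and rates in the usage example are illustrative assumptions, not the study's actual fitting procedure or data.

```python
# Sketch of fitting formation rates to the Michaelis-Menten model
# v = Vmax * S / (Km + S) by minimizing the sum of squared residuals
# over a coarse parameter grid (illustrative; real analyses typically
# use nonlinear least-squares solvers).

def michaelis_menten(s, vmax, km):
    """Reaction rate at substrate concentration s."""
    return vmax * s / (km + s)

def fit_mm(concs, rates, vmax_grid, km_grid):
    """Return the (Vmax, Km) pair from the grids with the smallest
    sum of squared residuals against the measured rates."""
    best = None
    for vmax in vmax_grid:
        for km in km_grid:
            sse = sum((michaelis_menten(s, vmax, km) - v) ** 2
                      for s, v in zip(concs, rates))
            if best is None or sse < best[0]:
                best = (sse, vmax, km)
    return best[1], best[2]
```

With noise-free synthetic data generated at Vmax = 2 and Km = 5, the grid search recovers exactly those parameters, since the residuals vanish only there.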
Abstract:
Equipped with state-of-the-art smartphones and mobile devices, today's highly interconnected urban population is increasingly dependent on these gadgets to organize and plan their daily lives. These applications often rely on current (or preferred) locations of individual users or a group of users to provide the desired service, which jeopardizes their privacy; users do not necessarily want to reveal their current (or preferred) locations to the service provider or to other, possibly untrusted, users. In this paper, we propose privacy-preserving algorithms for determining an optimal meeting location for a group of users. We perform a thorough privacy evaluation by formally quantifying privacy-loss of the proposed approaches. In order to study the performance of our algorithms in a real deployment, we implement and test their execution efficiency on Nokia smartphones. By means of a targeted user-study, we attempt to get an insight into the privacy-awareness of users in location-based services and the usability of the proposed solutions.
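As a non-private baseline for the meeting-location objective (not the paper's privacy-preserving protocol), the point minimizing the sum of squared distances to all users' locations is simply their centroid. The sketch below only illustrates this underlying optimization problem on plain coordinates; the paper's contribution is computing such an optimum without revealing the individual locations.

```python
# Non-private baseline: under a sum-of-squared-distances fairness
# objective, the optimal meeting point for a group is the centroid of
# their (here: planar, illustrative) coordinates.

def centroid(locations):
    """Optimal meeting point minimizing the sum of squared distances
    to the given (x, y) user locations."""
    n = len(locations)
    return (sum(x for x, _ in locations) / n,
            sum(y for _, y in locations) / n)
```

A privacy-preserving variant would compute this aggregate (e.g. under homomorphic encryption) so that neither the service provider nor other users learn any individual input.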