908 results for Task-to-core mapping


Relevance: 100.00%

Publisher:

Abstract:

The present doctoral thesis discusses ways to improve the performance of driving simulators, provides objective measures for a road safety evaluation methodology based on drivers' behaviour and response, and investigates drivers' adaptation to driving assistance systems. The activities are divided into two macro areas: driving simulation studies and on-road experiments. In the driving simulation experiments, a classical motion cueing algorithm with logarithmic scaling was implemented on a 2-DOF motion cueing simulator, and the resulting motion cues were judged desirable by the participants. It was also found that motion stimuli could change drivers' behaviour in terms of depth/distance perception. In the on-road experiments, driver gaze behaviour was investigated to obtain objective measures of road sign visibility and driver reaction time. Sensor fusion and vehicle monitoring instruments proved useful for an objective assessment of pavement condition and driver performance. The last chapter of the thesis discusses safety assessment during the use of Level 1 automated driving (ACC) in both a simulator and an on-road experiment. Drivers' visual behaviour was investigated in both studies with an innovative classification method to identify epochs of driver distraction. The behavioural adaptation to ACC showed that drivers may divert their attention away from the driving task to engage in secondary, non-driving-related tasks.
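
As a rough illustration of the kind of logarithmic cue scaling mentioned above (not the thesis's actual implementation), the sketch below compresses vehicle accelerations logarithmically and passes them through a first-order high-pass washout filter; the gains, cut-off frequency, and mapping to the 2-DOF platform are all assumptions.

```python
import numpy as np

DT = 0.01        # control step (s) -- assumption
A_REF = 1.0      # m/s^2, reference acceleration for the logarithmic scaling -- assumption
GAIN = 0.4       # overall cue gain -- assumption
OMEGA_HP = 1.0   # rad/s, washout (high-pass) cut-off -- assumption

def log_scale(acc):
    """Compress accelerations: roughly linear near zero, logarithmic for large magnitudes."""
    return np.sign(acc) * GAIN * A_REF * np.log1p(np.abs(acc) / A_REF)

def washout_highpass(signal, dt=DT, omega=OMEGA_HP):
    """First-order high-pass filter: keeps onset cues, washes out sustained acceleration."""
    out = np.zeros_like(signal, dtype=float)
    prev_in = prev_out = 0.0
    alpha = 1.0 / (1.0 + omega * dt)
    for i, x in enumerate(signal):
        prev_out = alpha * (prev_out + x - prev_in)
        prev_in = x
        out[i] = prev_out
    return out

# Example: sustained 3 m/s^2 braking; the platform cue reproduces the onset, then decays.
t = np.arange(0.0, 5.0, DT)
vehicle_acc = np.where(t > 1.0, -3.0, 0.0)
platform_cue = washout_highpass(log_scale(vehicle_acc))
```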

Relevance: 100.00%

Publisher:

Abstract:

Collecting and analysing data is an important element in any field of human activity and research. In sports as well, collecting and analysing statistical data is attracting growing interest. Some exemplary use cases are: improving technical/tactical aspects for team coaches, defining game strategies based on the opposing team's play, and evaluating player performance. Other advantages relate to making more precise and impartial referee decisions: a wrong decision can change the outcome of important matches. Finally, such data can provide better representations and graphic effects that make the game more engaging for the audience during the match. Nowadays it is possible to delegate this type of task to automatic software systems that use cameras or even hardware sensors to collect images or data and process them. One of the most efficient methods of collecting data is to process video images of the sporting event through techniques combining machine learning and computer vision. As in other domains where computer vision is applied, the main tasks in sports are object detection, player tracking, and athlete pose estimation. The goal of the present thesis is to apply different CNN models to analyse volleyball matches. Starting from video frames of a volleyball match, we reproduce a bird's-eye view of the playing court onto which all the players are projected, also reporting, for each player, the type of action she/he is performing.
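
To illustrate the bird's-eye projection step described in this abstract (the thesis's own pipeline is not reproduced here), the following sketch assumes player foot positions have already been detected in the image and maps them onto a top-down court plan with a homography estimated from four court landmarks; all coordinates and dimensions are hypothetical.

```python
import numpy as np
import cv2

# Hypothetical pixel coordinates of the four court corners in the camera image.
image_corners = np.array([[412, 310], [1508, 322], [1760, 940], [180, 925]], dtype=np.float32)

# Corresponding coordinates on a top-down court plan (a volleyball court is 18 m x 9 m;
# here scaled at 50 px per metre -> a 900 x 450 px plan).
court_corners = np.array([[0, 0], [900, 0], [900, 450], [0, 450]], dtype=np.float32)

# Homography from image plane to court plan.
H, _ = cv2.findHomography(image_corners, court_corners)

def to_birdseye(feet_points_px):
    """Project detected player foot positions (N x 2, pixels) onto the court plan."""
    pts = np.asarray(feet_points_px, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Example: positions a detector might have returned for three players.
players_px = [(650, 700), (1100, 650), (900, 820)]
print(to_birdseye(players_px))  # positions in court-plan pixels (50 px = 1 m)
```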

Relevance: 100.00%

Publisher:

Abstract:

Reinforcement Learning (RL) provides a powerful framework to address sequential decision-making problems in which the transition dynamics is unknown or too complex to be represented. The RL approach is based on speculating what is the best decision to make given sample estimates obtained from previous interactions, a recipe that led to several breakthroughs in various domains, ranging from game playing to robotics. Despite their success, current RL methods hardly generalize from one task to another, and achieving the kind of generalization obtained through unsupervised pre-training in non-sequential problems seems unthinkable. Unsupervised RL has recently emerged as a way to improve the generalization of RL methods. Just like its non-sequential counterpart, the unsupervised RL framework comprises two phases: an unsupervised pre-training phase, in which the agent interacts with the environment without external feedback, and a supervised fine-tuning phase, in which the agent aims to efficiently solve a task in the same environment by exploiting the knowledge acquired during pre-training. In this thesis, we study unsupervised RL via state entropy maximization, in which the agent makes use of the unsupervised interactions to pre-train a policy that maximizes the entropy of its induced state distribution. First, we provide a theoretical characterization of the learning problem by considering a convex RL formulation that subsumes state entropy maximization. Our analysis shows that maximizing the state entropy in finite trials is inherently harder than RL. Then, we study the state entropy maximization problem from an optimization perspective. In particular, we show that the primal formulation of the corresponding optimization problem can be (approximately) addressed through tractable linear programs. Finally, we provide the first practical methodologies for state entropy maximization in complex domains, both when pre-training takes place in a single environment and when it spans multiple environments.
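
As a rough, self-contained illustration of state entropy maximization (not the methodologies developed in the thesis), the sketch below uses a k-nearest-neighbour particle estimate of the entropy of a batch of visited states and derives a per-state intrinsic reward from it; dimension-dependent constants are omitted and the batch is synthetic.

```python
import numpy as np

def knn_log_distances(states, k=3):
    """Log distance from each state to its k-th nearest neighbour in the batch."""
    states = np.asarray(states, dtype=np.float64)
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    kth = np.sort(dists, axis=1)[:, k]          # index 0 is the state itself
    return np.log(kth + 1e-8)

def state_entropy_estimate(states, k=3):
    """Particle-based (Kozachenko-Leonenko style) entropy estimate, up to additive and
    dimension-dependent constants: more spread-out visitation -> larger value."""
    n = len(states)
    return knn_log_distances(states, k).mean() + np.log(n)

# During unsupervised pre-training, the log k-NN distances can serve directly as
# per-state intrinsic rewards: states in sparsely visited regions are rewarded more.
batch = np.random.rand(256, 2)                  # toy batch of 2-D states
intrinsic_rewards = knn_log_distances(batch)
print(state_entropy_estimate(batch), intrinsic_rewards.mean())
```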

Relevance: 100.00%

Publisher:

Abstract:

The correctness of information gathered in production environments is an essential part of quality assurance processes in many industries. This task is often performed by human operators who visually take annotations at various steps of the production flow, and depending on the task performed, the correlation between where exactly the information is gathered and what it represents is more often than not lost in the process. The lack of labeled data places a major limit on the application of deep neural networks to object detection tasks; moreover, supervised training of deep models requires a large amount of data to be available. Reaching an adequately large collection of labeled images through classic data annotation techniques is an exhausting and costly task, not always suitable for every scenario. A possible solution is to generate synthetic data that replicates the real data and use it to fine-tune a deep neural network trained on one or more source domains to a different target domain. The purpose of this thesis is to present a real case scenario where the provided data were both scarce and missing the required annotations. Subsequently, a possible approach is presented in which synthetic data is generated to address those issues while serving as a training basis for deep neural networks for object detection, capable of working on images taken in production-like environments. Lastly, the thesis compares performance across different types of synthetic data and across the convolutional neural networks used as backbones for the model.
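
A minimal sketch of the fine-tuning step described above, assuming the synthetic images have already been rendered and annotated: it adapts a COCO-pre-trained torchvision Faster R-CNN to a hypothetical single-class production dataset. The data loader name, class count, and hyperparameters are assumptions, not the thesis's configuration.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pre-trained on a source domain (COCO) and replace its box head.
num_classes = 2  # background + one object class of interest (assumption)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)
model.train()

# `synthetic_loader` is an assumed DataLoader yielding (images, targets) in the standard
# torchvision detection format, with "boxes" and "labels" produced by the synthetic renderer.
for images, targets in synthetic_loader:
    loss_dict = model(images, targets)       # detection losses on synthetic data
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```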

Relevance: 60.00%

Publisher:

Abstract:

The understanding of the molecular mechanisms leading to peptide action entails the identification of a core active site. The major 28-aa neuropeptide, vasoactive intestinal peptide (VIP), provides neuroprotection. A lipophilic derivative with a stearyl moiety at the N-terminus and a norleucine residue replacing Met-17 was 100-fold more potent than VIP in promoting neuronal survival, acting at femtomolar to picomolar concentrations. To identify the active site in VIP, over 50 related fragments containing an N-terminal stearic acid attachment and an amidated C terminus were designed, synthesized, and tested for neuroprotective properties. Stearyl-Lys-Lys-Tyr-Leu-NH2 (derived from the C terminus of VIP and the related peptide, pituitary adenylate cyclase activating peptide) captured the neurotrophic effects offered by the entire 28-aa parent lipophilic derivative and protected against β-amyloid toxicity in vitro. Furthermore, the 4-aa lipophilic peptide recognized VIP-binding sites and enhanced choline acetyltransferase activity as well as cognitive functions in Alzheimer's disease-related in vivo models. Biodistribution studies following intranasal administration of the radiolabeled peptide demonstrated intact peptide in the brain 30 min after administration. Thus, lipophilic peptide fragments offer bioavailability and stability, providing lead compounds for drug design against neurodegenerative diseases.

Relevance: 50.00%

Publisher:

Abstract:

Many-core platforms based on Network-on-Chip (NoC [Benini and De Micheli 2002]) represent an emerging technology in the real-time embedded domain. Although the idea of grouping applications previously executed on separate single-core devices and accommodating them on a single many-core chip offers various options for power savings and cost reductions, and contributes to overall system flexibility, its implementation is a non-trivial task. In this paper we address the issue of application mapping onto a NoC-based many-core platform while considering the fundamentals and trends of current many-core operating systems; specifically, we elaborate on a limited migrative application model encompassing a message-passing paradigm as a communication primitive. As the main contribution, we formulate the problem of real-time application mapping and propose a three-stage process to solve it efficiently. Through analysis it is assured that derived solutions guarantee the fulfilment of the posed timing constraints regarding worst-case communication latencies, and at the same time provide an environment in which to perform load balancing for, e.g., thermal, energy, fault-tolerance or performance reasons. We also propose several constraints regarding the topological structure of the application mapping, as well as the inter- and intra-application communication patterns, which efficiently resolve the issues of pessimism and/or intractability that arise when performing the analysis.
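
The three-stage process itself is not reproduced here; as a much simpler illustration of NoC-aware mapping with worst-case latency checks, the sketch below greedily places communicating tasks on a 2-D mesh to keep volume-weighted hop distance low, then verifies a per-message latency bound. The workload, latency constants, and routing model are all assumptions.

```python
from itertools import product

MESH_W, MESH_H = 4, 4
HOP_LATENCY = 5       # cycles per link hop -- assumption
ROUTER_LATENCY = 3    # cycles per traversed router -- assumption

# Hypothetical workload: (source_task, dest_task, message_volume, deadline_in_cycles)
messages = [("t0", "t1", 80, 40), ("t1", "t2", 60, 60), ("t0", "t3", 20, 100)]
tasks = ["t0", "t1", "t2", "t3"]

def hops(a, b):
    """Manhattan distance on the mesh, i.e. hop count under dimension-ordered (XY) routing."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def worst_case_latency(src, dst):
    """Very coarse contention-free bound: per-hop link latency plus router traversals."""
    h = hops(src, dst)
    return h * HOP_LATENCY + (h + 1) * ROUTER_LATENCY

placement = {}
free_cores = list(product(range(MESH_W), range(MESH_H)))

for task in tasks:
    def cost(core):
        # Volume-weighted distance to already-placed communication peers of `task`.
        total = 0
        for s, d, vol, _ in messages:
            if task == s and d in placement:
                total += vol * hops(core, placement[d])
            elif task == d and s in placement:
                total += vol * hops(core, placement[s])
        return total
    best = min(free_cores, key=cost)
    placement[task] = best
    free_cores.remove(best)

for s, d, _, deadline in messages:
    wcl = worst_case_latency(placement[s], placement[d])
    status = "OK" if wcl <= deadline else "VIOLATED"
    print(f"{s}->{d}: {placement[s]}->{placement[d]}, WCL={wcl} cycles, deadline={deadline} ({status})")
```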

Relevance: 50.00%

Publisher:

Abstract:

Modern multicore processors for the embedded market are often heterogeneous in nature. One feature that is often available is multiple sleep states, with varying transition costs for entering and leaving each sleep state. This research effort explores energy-efficient task mapping on such a heterogeneous multicore platform to reduce the overall energy consumption of the system. This is performed in the context of a partitioned scheduling approach and a very realistic power model, which improves on some of the simplifying assumptions often made in the state of the art. The developed heuristic consists of two phases: in the first phase, tasks are allocated to minimise their active energy consumption, while the second phase trades off a higher active energy consumption against an increased ability to exploit savings through more efficient sleep states. Extensive simulations demonstrate the effectiveness of the approach.
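
A toy rendition of such a two-phase heuristic is sketched below, with a deliberately simplified power model (one deep sleep state, at most one transition per hyperperiod); phase 1 allocates tasks greedily by active energy, and phase 2 accepts single-task moves that lower the total (active plus sleep) energy. All core parameters and task demands are invented for illustration and do not reflect the paper's model.

```python
HYPERPERIOD = 100.0   # common analysis window -- assumption (illustrative units throughout)

cores = {   # per-core parameters: active power, deep-sleep power, sleep-transition energy, speed
    "big":    {"p_active": 2.0, "p_sleep": 0.05, "e_trans": 3.0, "speed": 1.0},
    "little": {"p_active": 0.6, "p_sleep": 0.02, "e_trans": 1.0, "speed": 0.5},
}
tasks = {"t0": 10.0, "t1": 25.0, "t2": 8.0}   # execution demand at speed 1.0 -- invented

def busy_time(core, assigned):
    return sum(tasks[t] / cores[core]["speed"] for t in assigned)

def core_energy(core, assigned):
    c = cores[core]
    busy = busy_time(core, assigned)
    idle = HYPERPERIOD - busy
    trans = c["e_trans"] if assigned and idle > 0 else 0.0   # one sleep transition per hyperperiod
    return c["p_active"] * busy + c["p_sleep"] * idle + trans

def total_energy(m):
    return sum(core_energy(c, m[c]) for c in cores)

# Phase 1: greedy allocation by active energy only, keeping each core's utilisation feasible.
mapping = {c: [] for c in cores}
for t, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
    feasible = [c for c in cores
                if busy_time(c, mapping[c]) + demand / cores[c]["speed"] <= HYPERPERIOD]
    best = min(feasible, key=lambda c: cores[c]["p_active"] * demand / cores[c]["speed"])
    mapping[best].append(t)

# Phase 2: accept single-task moves that lower *total* energy (active + sleep + transitions).
def try_improve(m):
    for src in cores:
        for t in m[src]:
            for dst in cores:
                if dst == src:
                    continue
                if busy_time(dst, m[dst]) + tasks[t] / cores[dst]["speed"] > HYPERPERIOD:
                    continue
                trial = {c: list(v) for c, v in m.items()}
                trial[src].remove(t)
                trial[dst].append(t)
                if total_energy(trial) < total_energy(m):
                    return trial
    return None

while (better := try_improve(mapping)) is not None:
    mapping = better

print(mapping, round(total_energy(mapping), 2), "(energy in illustrative units)")
```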

Relevance: 50.00%

Publisher:

Abstract:

After ischemic stroke, the ischemic damage to brain tissue evolves over time and with an uneven spatial distribution. Early irreversible changes occur in the ischemic core, whereas, in the penumbra, which receives more collateral blood flow, the damage is milder and delayed. A better characterization of the penumbra, irreversibly damaged and healthy tissues is needed to understand the mechanisms involved in tissue death. MRSI is a powerful tool for this task if the scan time can be decreased whilst maintaining high sensitivity. Therefore, we made improvements to a ¹H MRSI protocol to study middle cerebral artery occlusion in mice. The spatial distribution of changes in the neurochemical profile was investigated, with an effective spatial resolution of 1.4 μL, applying the protocol on a 14.1-T magnet. The acquired maps included the difficult-to-separate glutamate and glutamine resonances and, to our knowledge, the first mapping of the metabolites γ-aminobutyric acid and glutathione in vivo, within a metabolite measurement time of 45 min. The maps were in excellent agreement with findings from single-voxel spectroscopy and offer spatial information at a scan time acceptable for most animal models. The metabolites measured differed with respect to the temporal evolution of their concentrations and the localization of these changes. Specifically, lactate and N-acetylaspartate concentration changes largely overlapped with the T2-hyperintense region visualized with MRI, whereas changes in cholines and glutathione affected the entire middle cerebral artery territory. Glutamine maps showed elevated levels in the ischemic striatum until 8 h after reperfusion, and until 24 h in cortical tissue, indicating differences in excitotoxic effects and secondary energy failure in these tissue types. Copyright © 2011 John Wiley & Sons, Ltd.

Relevance: 50.00%

Publisher:

Abstract:

We have mapped the genes coding for two major structural polypeptides of the vaccinia virus core by hybrid selection and transcriptional mapping. First, RNA was selected by hybridization to restriction fragments of the vaccinia virus genome and translated in vitro, and the products were immunoprecipitated with antibodies against the two polypeptides. This approach allowed us to map the genes to the left-hand end of the largest HindIII restriction fragment of 50 kilobase pairs. Second, transcriptional mapping of this region of the genome revealed the presence of the two expected RNAs. Both RNAs are transcribed from the leftward-reading strand, and the 5'-ends of the genes are separated by about 7.5 kilobase pairs of DNA. Thus, two genes encoding structural polypeptides with a similar location in the vaccinia virus particle are clustered at approximately 105 kilobase pairs from the left-hand end of the 180-kilobase-pair vaccinia virus genome.

Relevance: 50.00%

Publisher:

Abstract:

A description of a data item's provenance can be provided in different forms, and which form is best depends on the intended use of that description. Because of this, different communities have made quite distinct underlying assumptions in their models for electronically representing provenance. Approaches deriving from the library and archiving communities emphasise an agreed vocabulary by which resources can be described and, in particular, their attribution asserted (who created the resource, who modified it, where it was stored, etc.). The primary purpose here is to provide intuitive metadata by which users can search for and index resources. In comparison, models for representing the results of scientific workflows have been developed with the assumption that each event or piece of intermediary data in a process's execution can and should be documented, to give a full account of the experiment undertaken. These occurrences are connected together by stating where one derived from, triggered, or otherwise caused another, and so form a causal graph. Mapping between the two approaches would be beneficial in integrating systems and exploiting the strengths of each. In this paper, we specify such a mapping between Dublin Core and the Open Provenance Model. We further explain the technical issues to overcome and the rationale behind the approach, to allow the same method to apply in mapping similar schemes.
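
As an illustration of the general idea (the specific correspondences defined in the paper are not reproduced here), the sketch below expresses a few plausible Dublin Core to OPM mappings as a lookup table and turns a flat attribution record into causal-graph edges around a hypothetical creation process node.

```python
# Illustrative sketch only: a few plausible Dublin Core -> Open Provenance Model
# correspondences expressed as a lookup table, not the mapping defined in the paper.

DC_TO_OPM = {
    "dc:creator":     ("Agent",    "WasControlledBy"),   # who created the artifact
    "dc:contributor": ("Agent",    "WasControlledBy"),
    "dc:source":      ("Artifact", "WasDerivedFrom"),    # the resource it derives from
}

def dc_record_to_opm_edges(resource_uri, dc_fields):
    """Turn flat DC attribution metadata into causal-graph edges around a single
    (hypothetical) creation process node for the resource."""
    process = f"{resource_uri}#creation"
    edges = [("Artifact", resource_uri, "WasGeneratedBy", "Process", process)]
    for field, value in dc_fields.items():
        node_kind, relation = DC_TO_OPM.get(field, (None, None))
        if node_kind == "Agent":
            edges.append(("Process", process, relation, "Agent", value))
        elif node_kind == "Artifact":
            edges.append(("Artifact", resource_uri, relation, "Artifact", value))
    return edges

print(dc_record_to_opm_edges("http://example.org/report.pdf",
                             {"dc:creator": "Alice", "dc:source": "http://example.org/data.csv"}))
```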

Relevance: 50.00%

Publisher:

Abstract:

Two regions common to all UsnRNP core polypeptides have been described: Sm motif 1 and Sm motif 2. Rabbits were immunized with a 22-amino-acid peptide containing one segment of Sm motif 1 (YRGTLVSTDNYFNLQLNEAEEF, corresponding to residues 11-32) from the yeast F protein. After immunization, the rabbit sera contained antibodies that not only reacted specifically with the peptide from the yeast F protein but also cross-reacted with Sm polypeptides from mammals; that is, with purified human U1 snRNPs. The results suggest that the peptide used and the human Sm polypeptides contain a common feature recognized by the polyclonal antibodies. A large collection of human systemic lupus erythematosus sera was assayed using the yeast peptide as an antigen source. Seventy per cent of the systemic lupus erythematosus sera contain an antibody specificity that cross-reacts with the yeast peptide.