693 results for workload


Relevance: 10.00%

Abstract:

[EN] It was investigated whether skeletal muscle K⁺ release is linked to the degree of anaerobic energy production. Six subjects performed an incremental bicycle exercise test in normoxic and hypoxic conditions prior to and after 2 and 8 wk of acclimatization to 4,100 m. The highest workload completed by all subjects in all trials was 260 W. With acute hypoxic exposure prior to acclimatization, venous plasma [K⁺] was lower (P < 0.05) in normoxia (4.9 ± 0.1 mM) than hypoxia (5.2 ± 0.2 mM) at 260 W, but similar at exhaustion, which occurred at 400 ± 9 W and 307 ± 7 W (P < 0.05), respectively. At the same absolute exercise intensity, leg net K⁺ release was unaffected by hypoxic exposure independent of acclimatization. After 8 wk of acclimatization, no difference existed in venous plasma [K⁺] between the normoxic and hypoxic trials, either at submaximal intensities or at exhaustion (360 ± 14 W vs. 313 ± 8 W; P < 0.05). At the same absolute exercise intensity, leg net K⁺ release was less (P < 0.001) than prior to acclimatization and reached negative values in both hypoxic and normoxic conditions after acclimatization. Moreover, the reduction in plasma volume during exercise relative to rest was less (P < 0.01) in normoxic than hypoxic conditions, irrespective of the degree of acclimatization (at 260 W prior to acclimatization: -4.9 ± 0.8% in normoxia and -10.0 ± 0.4% in hypoxia). It is concluded that leg net K⁺ release is unrelated to anaerobic energy production and that acclimatization reduces leg net K⁺ release during exercise.

Relevance: 10.00%

Abstract:

[EN] We hypothesized that reliance on lactate as a means of energy distribution is higher after a prolonged period of acclimatization (9 wk) than at sea level, owing to a higher lactate Ra and disposal from active skeletal muscle. To evaluate this hypothesis, six Danish lowlanders (25 ± 2 yr) were studied at rest and during 20 min of bicycle exercise at 146 W at sea level (SL) and after 9 wk of acclimatization to 5,260 m (Alt). Whole-body glucose Ra was similar at SL and Alt at rest and during exercise. Lactate Ra was also similar for the two conditions at rest; however, during exercise, lactate Ra was substantially lower at SL (65 µmol·min⁻¹·kg body wt⁻¹) than at Alt (150 µmol·min⁻¹·kg body wt⁻¹) at the same exercise intensity. During exercise, net lactate release was approximately 6-fold higher at Alt than at SL, and, related to this, tracer-calculated leg lactate uptake and release were both 3- to 4-fold higher at Alt than at SL. The contribution of the two legs to glucose disposal was similar at SL and Alt; however, the contribution of the two legs to lactate Ra was significantly lower at rest and during exercise at SL (27 and 81%) than at Alt (45 and 123%). In conclusion, at rest and during exercise at the same absolute workload, CHO and blood glucose utilization were similar at SL and Alt. Leg net lactate release was severalfold higher, and the contribution of leg lactate release to whole-body lactate Ra was higher, at Alt than at SL. During exercise, the relative contribution of lactate oxidation to whole-body CHO oxidation was substantially higher at Alt than at SL as a result of increased uptake and subsequent oxidation of lactate by the active skeletal muscles.
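For readers unfamiliar with the tracer terminology: Ra denotes the rate of appearance of a substrate in blood, estimated by isotope dilution. The abstract does not state the exact equations used; the standard relation for a constant tracer infusion at isotopic steady state is

    R_a = \frac{F}{SA_p}, \qquad R_d = R_a \ \text{at steady state},

where F is the tracer infusion rate and SA_p the plasma specific activity (or enrichment, for stable isotopes).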

Relevance: 10.00%

Abstract:

[EN] 1. One to five weeks of chronic exposure to hypoxia has been shown to reduce peak blood lactate concentration compared to acute exposure to hypoxia during exercise, the high-altitude 'lactate paradox'. However, we hypothesized that a sufficiently long exposure to hypoxia would result in a blood lactate concentration and a net lactate release from the active leg similar to those observed in acute hypoxia, independent of work intensity. 2. Six Danish lowlanders (25-26 years) were studied during graded incremental bicycle exercise under four conditions: at sea level breathing either ambient air (0 m normoxia) or a low-oxygen gas mixture (10% O₂ in N₂, 0 m acute hypoxia), and after 9 weeks of acclimatization to 5260 m breathing either ambient air (5260 m chronic hypoxia) or a normoxic gas mixture (47% O₂ in N₂, 5260 m acute normoxia). In addition, one-leg knee-extensor exercise was performed during 5260 m chronic hypoxia and 5260 m acute normoxia. 3. During incremental bicycle exercise, the arterial lactate concentrations were similar at submaximal work at 0 m acute hypoxia and 5260 m chronic hypoxia, but higher than in both 0 m normoxia and 5260 m acute normoxia. However, peak lactate concentration was similar under all conditions (10.0 ± 1.3, 10.7 ± 2.0, 10.9 ± 2.3 and 11.0 ± 1.0 mmol·l⁻¹ at 0 m normoxia, 0 m acute hypoxia, 5260 m chronic hypoxia and 5260 m acute normoxia, respectively). Despite a similar lactate concentration at submaximal and maximal workload, the net lactate release from the leg was lower during 0 m acute hypoxia (peak 8.4 ± 1.6 mmol·min⁻¹) than at 5260 m chronic hypoxia (peak 12.8 ± 2.2 mmol·min⁻¹). The same was observed for 0 m normoxia (peak 8.9 ± 2.0 mmol·min⁻¹) compared to 5260 m acute normoxia (peak 12.6 ± 3.6 mmol·min⁻¹). Exercise after acclimatization with a small muscle mass (one-leg knee-extensor) elicited similar lactate concentrations (peak 4.4 ± 0.2 vs. 3.9 ± 0.3 mmol·l⁻¹) and net lactate release (peak 16.4 ± 1.8 vs. 14.3 mmol·min⁻¹) from the active leg at 5260 m chronic hypoxia and 5260 m acute normoxia. 4. In conclusion, in lowlanders acclimatized for 9 weeks to an altitude of 5260 m, the arterial lactate concentration was similar at 0 m acute hypoxia and 5260 m chronic hypoxia. The net lactate release from the active leg was higher at 5260 m chronic hypoxia than at 0 m acute hypoxia, implying enhanced lactate utilization with prolonged acclimatization to altitude. The present study clearly shows the absence of a lactate paradox in lowlanders sufficiently acclimatized to altitude.

Relevance: 10.00%

Abstract:

[EN] We hypothesized that reducing arterial O2 content (CaO2) by lowering the hemoglobin concentration ([Hb]) would result in a higher blood flow, as observed with a low PO2, and maintenance of O2 delivery. Seven young healthy men were studied twice, at rest and during two-legged submaximal and peak dynamic knee extensor exercise in a control condition (mean control [Hb] 144 g/l) and after 1-1.5 liters of whole blood had been withdrawn and replaced with albumin [mean drop in [Hb] 29 g/l (range 19-38 g/l); low [Hb]]. Limb blood flow (LBF) was higher (P < 0.01) with low [Hb] during submaximal exercise (i.e., at 30 W, LBF was 2.5 +/- 0.1 and 3.0 +/- 0.1 l/min for control [Hb] and low [Hb], respectively; P < 0.01), resulting in a maintained O2 delivery and O2 uptake for a given workload. However, at peak exercise, LBF was unaltered (6.5 +/- 0.4 and 6.6 +/- 0.6 l/min for control [Hb] and low [Hb], respectively), which resulted in an 18% reduction in O2 delivery (P < 0.01). This occurred despite peak cardiac output in neither condition reaching >75% of maximal cardiac output (approximately 26 l/min). It is concluded that a low CaO2 induces an elevation in submaximal muscle blood flow and that O2 delivery to contracting muscles is tightly regulated.
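For orientation, convective O₂ delivery is the product of blood flow and arterial O₂ content. With the commonly used approximation (ignoring dissolved O₂ and assuming near-full arterial saturation, neither of which the abstract reports),

    \dot{D}O_2 = LBF \times CaO_2, \qquad CaO_2 \approx 1.34 \times [Hb] \times S_aO_2,

the reported numbers are internally consistent: at peak exercise LBF was essentially unchanged (6.5 vs. 6.6 l/min), so the expected drop in O₂ delivery roughly tracks the drop in [Hb], (144 − 115)/144 ≈ 20%, close to the reported 18% reduction.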

Relevance: 10.00%

Abstract:

Recent statistics have shown that two of the most important causes of failure of UAV (Uninhabited Aerial Vehicle) missions are the low level of decisional autonomy of the vehicles and the man-machine interface. A relevant issue is therefore to design a display/controls architecture that allows efficient interaction between the operator and the remote vehicle, and to develop a level of automation that allows the vehicle to decide on changes to the mission. The research presented in this paper focuses on a modular man-machine interface simulator for UAV control, which simulates UAV missions and was developed to experiment with solutions to this problem. The main components of the simulator are an advanced interface and a block, called automation, which comprises an algorithm implementing the level of automation of the system. The simulator has been designed and developed following a user-centred design approach in order to take into account the operator's needs in communicating with the vehicle. The level of automation has been developed following supervisory control theory, in which the human becomes a supervisor who sends high-level commands, such as parts of the mission, targets, constraints and if-then rules, while the vehicle receives, comprehends and translates such commands into detailed actions such as routes or actions on the control system. In order to allow the vehicle to calculate and recalculate a safe and efficient route, in terms of distance, time and fuel, a 3D planning algorithm has been developed. It is based on considering UASs, representative of real-world systems, as objects moving in a virtual environment (terrain, obstacles and no-fly zones) which replicates the airspace. Original obstacle-avoidance strategies have been conceived in order to generate mission plans that are consistent with flight rules and with the vehicle's performance constraints. The interface is based on a touch screen, used to send high-level commands to the vehicle, and a 3D virtual display which provides a stereoscopic and augmented visualization of the complex scenario in which the vehicle operates. Furthermore, it is provided with an audio feedback message generator. Simulation tests have been conducted with pilot trainers to evaluate the reliability of the algorithm and the effectiveness and efficiency of the interface in supporting the operator in the supervision of a UAV mission. Results have revealed that the planning algorithm calculates very efficient routes in a few seconds, that an adequate level of workload is required to command the vehicle, and that the 3D-based interface provides the operator with a good sense of presence and enhances his awareness of the mission scenario and of the vehicle under his control.
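The abstract does not detail the 3D planning algorithm, so the following is only an illustrative sketch of the general idea it describes (routing an object through a discretized airspace while avoiding obstacles and no-fly zones), using a plain A* search on a 3D grid with a hypothetical blocked-cell set; the thesis's actual algorithm, cost terms (distance, time, fuel) and flight-rule constraints are not reproduced here.

import heapq, itertools

def plan_route_3d(start, goal, blocked, grid_dim):
    """A* search on a 3D grid; 'blocked' is a set of (x, y, z) cells standing in
    for terrain, obstacles and no-fly zones (illustrative only)."""
    def h(p):  # Chebyshev-distance heuristic (admissible with unit-or-larger step costs)
        return max(abs(a - b) for a, b in zip(p, goal))

    moves = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    counter = itertools.count()            # tie-breaker so the heap never compares nodes
    open_set = [(h(start), 0.0, next(counter), start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        _, g, _, current, parent = heapq.heappop(open_set)
        if current in came_from:           # already expanded with an equal or better cost
            continue
        came_from[current] = parent
        if current == goal:                # reconstruct the route back to the start
            path = [current]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for d in moves:
            nxt = tuple(c + m for c, m in zip(current, d))
            if any(c < 0 or c >= n for c, n in zip(nxt, grid_dim)) or nxt in blocked:
                continue                   # outside the airspace or inside a no-fly zone
            step = sum(abs(m) for m in d) ** 0.5   # diagonal moves cost more
            if g + step < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = g + step
                heapq.heappush(open_set, (g + step + h(nxt), g + step,
                                          next(counter), nxt, current))
    return None                            # no feasible route

# Hypothetical usage: a 20x20x5 airspace with a wall-like no-fly zone.
no_fly = {(x, y, z) for x in range(8, 12) for y in range(0, 15) for z in range(5)}
route = plan_route_3d((0, 0, 0), (19, 19, 2), no_fly, (20, 20, 5))
print("no route" if route is None else f"{len(route)} waypoints")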

Relevance: 10.00%

Abstract:

Large-scale wireless ad hoc networks of computers, sensors, PDAs, etc. (i.e. nodes) are revolutionizing connectivity and leading to a paradigm shift from centralized systems to highly distributed and dynamic environments. An example of ad hoc networks are sensor networks, which are usually composed of small units able to sense and transmit to a sink elementary data which are successively processed by an external machine. Recent improvements in the memory and computational power of sensors, together with the reduction of energy consumption, are rapidly changing the potential of such systems, moving the attention towards data-centric sensor networks. A plethora of routing and data management algorithms have been proposed for network path discovery, ranging from broadcasting/flooding-based approaches to those using global positioning systems (GPS). We studied WGrid, a novel decentralized infrastructure that organizes wireless devices in an ad hoc manner, where each node has one or more virtual coordinates through which both message routing and data management occur without reliance on either flooding/broadcasting operations or GPS. The resulting ad hoc network does not suffer from the dead-end problem, which happens in geographic-based routing when a node is unable to locate a neighbour closer to the destination than itself. WGrid allows multidimensional data management, since nodes' virtual coordinates can act as a distributed database without needing either special implementation or reorganization. Any kind of data (both single- and multidimensional) can be distributed, stored and managed. We show how a location service can be easily implemented so that any search is reduced to a simple query, like for any other data type. WGrid has then been extended by adopting a replication methodology. We called the resulting algorithm WRGrid. Just like WGrid, WRGrid acts as a distributed database without needing either special implementation or reorganization, and any kind of data can be distributed, stored and managed. We have evaluated the benefits of replication on data management, finding out, from experimental results, that it can halve the average number of hops in the network. The direct consequences of this fact are a significant improvement in energy consumption and workload balancing among sensors (number of messages routed by each node). Finally, thanks to the replicas, whose number can be chosen arbitrarily, the resulting sensor network can cope with sensor disconnections/connections, due to failures of sensors, without data loss. Another extension to WGrid is W*Grid, which strongly improves network recovery performance from link and/or device failures that may happen due to crashes or battery exhaustion of devices or to temporary obstacles. W*Grid guarantees, by construction, at least two disjoint paths between each couple of nodes. This implies that recovery in W*Grid occurs without broadcast transmissions and guarantees robustness while drastically reducing energy consumption. An extensive number of simulations shows the efficiency, robustness and traffic load of the resulting networks under several scenarios of device density and number of coordinates. Performance has been compared to existing algorithms in order to validate the results.
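To make the routing concept concrete, here is a deliberately simple, hypothetical sketch of greedy forwarding over virtual coordinates: each node hands the message to the neighbour whose coordinate is closest to the destination. This is not the WGrid algorithm (which avoids dead ends by construction); it only illustrates the greedy-forwarding idea the abstract refers to, and how a dead end can surface in general when no neighbour is closer than the current node.

def distance(a, b):
    """Euclidean distance between two virtual coordinates."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def greedy_route(topology, coords, src, dst):
    """topology: node -> set of neighbour ids; coords: node -> virtual coordinate."""
    path, current = [src], src
    while current != dst:
        # candidates are the neighbours strictly closer to the destination
        candidates = [n for n in topology[current]
                      if distance(coords[n], coords[dst]) < distance(coords[current], coords[dst])]
        if not candidates:
            return path, False          # dead end: no closer neighbour exists
        current = min(candidates, key=lambda n: distance(coords[n], coords[dst]))
        path.append(current)
    return path, True

# Hypothetical 5-node network.
coords = {1: (0, 0), 2: (1, 0), 3: (2, 0), 4: (1, 1), 5: (2, 2)}
topology = {1: {2, 4}, 2: {1, 3, 4}, 3: {2}, 4: {1, 2, 5}, 5: {4}}
print(greedy_route(topology, coords, 1, 5))   # ([1, 4, 5], True)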

Relevance: 10.00%

Abstract:

Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a lot of human work. Producing meaningful schemas and populating them with domain-specific data is in fact a very difficult and time-consuming task, even more so if the task consists in modelling knowledge at a web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning ontologies from textual data, lightening the human workload required for conceptualizing domain-specific knowledge and populating an extracted schema with real data, and speeding up the whole ontology production process. Here computational linguistics plays a fundamental role, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which to extend existing knowledge bases or create new ones. In the state of the art, automatic ontology learning systems are mainly based on plain pipelined linguistic classifiers performing tasks such as named entity recognition, entity resolution, taxonomy and relation extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20], or more recently Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing with the ability of logically understanding the structure of discourse [7]. In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds, through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.
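As a purely illustrative sketch, not the pipeline developed in this work, the kind of structure involved (a frame with participant entities, turned into triples that can populate an ontology) can be represented as follows; the frame name, role labels and namespace are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Frame:
    """A simple semantic-frame-like structure: a relation plus participant roles."""
    name: str                                   # e.g. "Acquisition" (hypothetical)
    roles: dict = field(default_factory=dict)   # role label -> entity

    def to_triples(self, ns="http://example.org/"):
        """Emit RDF-style (subject, predicate, object) triples for this frame."""
        frame_id = f"{ns}frame/{self.name}"
        triples = [(frame_id, f"{ns}type", f"{ns}{self.name}")]
        for role, entity in self.roles.items():
            triples.append((frame_id, f"{ns}{role}", f"{ns}entity/{entity}"))
        return triples

# Hypothetical frame extracted from the sentence "Acme acquired Widget Co. in 2010".
f = Frame("Acquisition", {"buyer": "Acme", "acquired": "Widget_Co", "date": "2010"})
for triple in f.to_triples():
    print(triple)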

Relevance: 10.00%

Abstract:

This thesis addresses the analysis of migration to an enterprise cloud environment, with considerations on costs and performance compared to the environments of origin.

Relevance: 10.00%

Abstract:

The surface electrocardiogram (ECG) is an established diagnostic tool for the detection of abnormalities in the electrical activity of the heart. The interest of the ECG, however, extends beyond the diagnostic purpose. In recent years, studies in cognitive psychophysiology have related heart rate variability (HRV) to memory performance and mental workload. The aim of this thesis was to analyze the variability of surface-ECG-derived rhythms at two different time scales: the discrete-event time scale, typical of beat-related features (Objective I), and the “continuous” time scale of separated sources in the ECG (Objective II), in selected scenarios relevant to psychophysiological and clinical research, respectively. Objective I) Joint time-frequency and non-linear analysis of HRV was carried out, with the goal of assessing psychophysiological workload (PPW) in response to working-memory-engaging tasks. Results from fourteen healthy young subjects suggest the potential use of the proposed indices in discriminating PPW levels in response to varying memory-search task difficulty. Objective II) A novel source-cancellation method based on morphology clustering was proposed for the estimation of the atrial wavefront in atrial fibrillation (AF) from body surface potential maps. A strong direct correlation between the spectral concentration (SC) of the atrial wavefront and the temporal variability of the spectral distribution was shown in persistent AF patients, suggesting that with higher SC a shorter observation time is required to collect the spectral distribution from which the fibrillatory rate is estimated. This could be time- and cost-effective in clinical decision-making. The results held for reduced lead sets, suggesting that a simplified setup could also be considered, further reducing the costs. In designing the methods of this thesis, an online signal-processing approach was adopted, with the goal of contributing to real-world applicability. An algorithm for automatic assessment of ambulatory ECG quality and an automatic ECG delineation algorithm were also designed and validated.
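The thesis's specific time-frequency and non-linear indices are not given in the abstract; as a minimal illustration of beat-related (discrete-event) HRV analysis, the two classical time-domain indices SDNN and RMSSD can be computed from a series of RR intervals as follows (the RR values below are hypothetical).

import numpy as np

def hrv_time_domain(rr_ms):
    """Classical time-domain HRV indices from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # beat-to-beat (vagally mediated) variability
    mean_hr = 60000.0 / rr.mean()               # mean heart rate in beats per minute
    return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd, "mean_HR_bpm": mean_hr}

# Hypothetical RR series (ms) from a short recording.
rr_example = [812, 800, 795, 830, 845, 810, 790, 805, 820, 815]
print(hrv_time_domain(rr_example))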

Relevance: 10.00%

Abstract:

This thesis addresses the formulation of a referee assignment problem for the Italian Volleyball Serie A Championships. The problem has particular constraints, such as the requirement that a referee be assigned to different teams within a given period of time, while the minimal/maximal workload level for each referee is obtained by considering costs and profits in the objective function. The problem has been solved with an exact method, using an integer linear programming formulation and a clique-based decomposition to improve the computing time. Extensive computational experiments on real-world instances have been performed to assess the effectiveness of the proposed approach.
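The abstract does not give the model itself, but the flavour of such an assignment formulation can be conveyed with a deliberately simplified integer linear program, sketched below with the PuLP library. The data, the cost coefficients and the single workload-cap constraint are hypothetical; the thesis's actual objective, workload bounds and clique-based decomposition are not reproduced.

from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

referees = ["R1", "R2", "R3"]
games = ["G1", "G2", "G3", "G4"]
pairs = [(r, g) for r in referees for g in games]
cost = dict(zip(pairs, [4, 2, 3, 5, 1, 6, 2, 3, 3, 2, 4, 1]))   # hypothetical costs
max_games = {"R1": 2, "R2": 2, "R3": 2}                          # hypothetical workload caps

prob = LpProblem("referee_assignment", LpMinimize)
x = LpVariable.dicts("assign", pairs, cat=LpBinary)

# Objective: minimize the total assignment cost.
prob += lpSum(cost[r, g] * x[(r, g)] for r, g in pairs)
# Each game gets exactly one referee.
for g in games:
    prob += lpSum(x[(r, g)] for r in referees) == 1
# Each referee stays within the workload cap.
for r in referees:
    prob += lpSum(x[(r, g)] for g in games) <= max_games[r]

prob.solve()
print([(r, g) for r, g in pairs if value(x[(r, g)]) == 1])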

Relevance: 10.00%

Abstract:

Study of road safety in transition zones: the interventions on the SP 610 "Selice-Montanara".

Relevance: 10.00%

Abstract:

The first study verified the reliability of the Polimedicus software and the effects induced by aerobic training at FatMax intensity. Sixteen overweight subjects, aged about 40-55 years, were recruited and underwent an incremental test until reaching an RER of 0.95; from that point the workload was increased by 1 km/h every minute until exhaustion. It was then verified whether the values extrapolated by the program matched those observed during a 1-hour constant-load test. After 8 weeks of training, the subjects performed another incremental test. The data showed that Polimedicus is not very reliable, especially for HR. In the second study a new program, Inca, was developed, and its results were compared with the data obtained in the first study with Polimedicus. The final results showed that Inca is more reliable. In the third study, we verified the accuracy of the FatMax calculation with Inca and the FATmaxwork test. Twenty-five overweight subjects, aged 40-55 years, were recruited and underwent the FATmaxwork test. It was then verified whether the values extrapolated by INCA matched those observed during a 1-hour constant-load test. The analysis showed that the FatMax calculation was accurate during the workload. Conclusion: a certain difficulty emerged in determining this parameter, owing to both inter-individual and intra-individual variability. In the future, INCA will need to be improved to obtain even more valid training protocols.

Relevance: 10.00%

Abstract:

Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with spatio-temporal variability of the chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. The assumption of a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at fine spatial resolution, which is very computationally intensive. Consequently, thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the flow of chip/package thermal analysis, we exploit the Intel Single Chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of the SCC temperature sensor readings and SCC power consumption. With the thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows. It accounts for temperature non-uniformities and self-heating while performing analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We improve the DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform which models the performance, power, and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs in detail. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data-sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs, with dynamic address remapping capability, is built and verified on real hardware.
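The abstract only states that the per-bank refresh period is adapted to the local temperature; a minimal sketch of that idea is shown below. The 64 ms base period, the temperature thresholds and the per-bank temperatures are illustrative assumptions for the sketch (loosely inspired by the common practice of refreshing DRAM more often in the extended temperature range), not values from the thesis.

# Illustrative only: choose a per-bank refresh period from the bank's estimated
# temperature. Thresholds and periods are assumptions, not values from the thesis.

BASE_PERIOD_MS = 64.0

def refresh_period_ms(bank_temp_c):
    """Return a conservative refresh period for a DRAM bank at the given temperature."""
    if bank_temp_c <= 85.0:
        return BASE_PERIOD_MS          # normal temperature range: nominal period
    elif bank_temp_c <= 95.0:
        return BASE_PERIOD_MS / 2.0    # extended range: refresh twice as often
    else:
        return BASE_PERIOD_MS / 4.0    # very hot bank: refresh even more often

# Hypothetical per-bank temperatures (C) from a thermal model of the 3D stack.
bank_temps = {0: 72.0, 1: 83.5, 2: 88.0, 3: 97.2}
for bank, t in bank_temps.items():
    print(f"bank {bank}: {t:5.1f} C -> refresh every {refresh_period_ms(t):.0f} ms")

The point of adapting the period per bank, as the abstract argues, is that cooler banks keep the longer refresh period instead of all banks being refreshed at the worst-case rate, which saves refresh power.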

Relevance: 10.00%

Abstract:

Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable with the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely curtail the potential computation capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and validation of design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate virtual platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides the architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and it mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology are severely limiting the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and at the same time improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation: the greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture. By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
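The abstract does not describe how "voltage scaling when allowed by workload requirements" is decided, so the following is only a toy sketch of that general idea: pick the lowest-energy operating point whose frequency still sustains the workload's throughput demand. The operating points, the throughput model and the quadratic voltage-to-energy relation are assumptions for the sketch, not data from the thesis.

# (voltage_V, frequency_MHz) operating points, from near-threshold to nominal (assumed).
OPERATING_POINTS = [(0.55, 80), (0.70, 200), (0.90, 450), (1.10, 800)]

def pick_operating_point(required_mops, work_per_cycle=1.0):
    """Return the lowest-voltage point whose frequency sustains the required MOPS."""
    for v, f_mhz in OPERATING_POINTS:                 # ordered by increasing voltage
        if f_mhz * work_per_cycle >= required_mops:
            return v, f_mhz
    return OPERATING_POINTS[-1]                       # saturate at the nominal point

def relative_dynamic_energy(v, v_nominal=1.10):
    """Dynamic energy per operation scales roughly with V^2 (CV^2 switching energy)."""
    return (v / v_nominal) ** 2

for load in (50, 180, 600, 1000):                     # hypothetical workload demands (MOPS)
    v, f = pick_operating_point(load)
    print(f"load {load:4d} MOPS -> {v:.2f} V @ {f} MHz, "
          f"~{relative_dynamic_energy(v):.2f}x nominal energy/op")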

Relevance: 10.00%

Abstract:

This thesis deals with heterogeneous architectures in standard workstations. Heterogeneous architectures represent an appealing alternative to traditional supercomputers because they are based on commodity components fabricated in large quantities; hence their price-performance ratio is unparalleled in the world of high-performance computing (HPC). In particular, different aspects related to the performance and power consumption of heterogeneous architectures have been explored. The thesis initially focuses on an efficient implementation of a parallel application whose execution time is dominated by a high number of floating-point instructions. It then touches on the central problem of efficient management of power peaks in heterogeneous computing systems. Finally, it discusses a memory-bound problem, where the execution time is dominated by memory latency. Specifically, the following main contributions have been carried out. A novel framework for the design and analysis of solar fields for Central Receiver Systems (CRS) has been developed. The implementation, based on a desktop workstation equipped with multiple Graphics Processing Units (GPUs), is motivated by the need for an accurate and fast simulation environment for studying mirror imperfections and non-planar geometries. Secondly, a power-aware scheduling algorithm for heterogeneous CPU-GPU architectures, based on an efficient distribution of the computing workload to the resources, has been realized. The scheduler manages the resources of several computing nodes with a view to reducing the peak power. This work makes two main contributions: the approach reduces the supply cost due to high peak power while having negligible impact on the parallelism of the computational nodes; from another point of view, the developed model allows designers to increase the number of cores without increasing the capacity of the power supply unit. Finally, an implementation for efficient graph exploration on reconfigurable architectures is presented. The purpose is to accelerate graph exploration while reducing the number of random memory accesses.
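The abstract gives no details of the scheduler, so the following is only a toy illustration of the general idea of distributing a computing workload across nodes so that the peak power stays low, using a greedy "place the most power-hungry task on the least-loaded node" heuristic over hypothetical per-task power estimates; it is not the algorithm developed in the thesis.

def schedule_min_peak_power(task_powers_w, n_nodes):
    """Greedy assignment of tasks (given estimated power in watts) to compute nodes."""
    nodes = [{"tasks": [], "power": 0.0} for _ in range(n_nodes)]
    # Placing the most power-hungry tasks first usually balances the nodes better.
    for tid, p in sorted(enumerate(task_powers_w), key=lambda t: -t[1]):
        target = min(nodes, key=lambda n: n["power"])   # least-loaded node so far
        target["tasks"].append(tid)
        target["power"] += p
    return nodes

tasks = [120.0, 95.0, 80.0, 60.0, 55.0, 40.0, 30.0]     # hypothetical per-task power (W)
assignment = schedule_min_peak_power(tasks, 3)
for i, node in enumerate(assignment):
    print(f"node {i}: tasks {node['tasks']}, estimated power {node['power']:.0f} W")
print("peak node power:", max(n["power"] for n in assignment), "W")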