878 results for Observational techniques and algorithms
Abstract:
This thesis proposes an integrated, holistic approach to the study of neuromuscular fatigue, in order to encompass all the causes and consequences underlying the phenomenon. Starting from the metabolic processes occurring at the cellular level, the reader is guided toward the physiological changes at the motoneuron and motor unit level, and from there to the more general biomechanical alterations. Chapter 1 reports the various definitions of fatigue across several contexts. Chapter 2 studies extensively the electrophysiological changes in terms of motor unit behavior and descending neural drive to the muscle, as well as the biomechanical adaptations they induce. Chapter 3 reports a study based on temporal features extracted from sEMG signals, which revealed the need for a more robust and reliable indicator during fatiguing tasks. Therefore, in Chapter 4, a novel bi-dimensional parameter is proposed. The study of sEMG-based indicators also opened a window onto the neurophysiological mechanisms underlying fatigue. For this purpose, Chapter 5 presents a protocol designed for the analysis of motor unit-related parameters during prolonged fatiguing contractions. In particular, two methodologies were applied to multichannel sEMG recordings of isometric contractions of the Tibialis Anterior muscle: the state-of-the-art technique for sEMG decomposition, and a coherence analysis of motor unit spike trains. The importance of a multi-scale approach is finally highlighted in the context of evaluating cycling performance, where fatigue is one of the limiting factors. In particular, the last chapter of this thesis can be considered a paradigm: physiological, metabolic, environmental, psychological and biomechanical factors influence a cyclist's performance, and only when all of these are considered together in a novel, integrative way is it possible to derive a clear model and make correct assessments.
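The abstract does not detail the sEMG indicators or the bi-dimensional parameter. A classic spectral indicator that such studies build on is the median frequency (MDF) of the sEMG power spectrum, which shifts downward as a muscle fatigues. The sketch below is purely illustrative (not the thesis's method); the signal and sampling rate are hypothetical.

```python
import math, cmath

def median_frequency(signal, fs):
    """Estimate the median frequency (MDF): the frequency that splits the
    power spectrum into two halves of equal power."""
    n = len(signal)
    half = n // 2
    # Naive DFT power spectrum over positive frequencies, DC excluded
    # (O(n^2), acceptable for a short illustrative signal).
    power = []
    for k in range(1, half):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power.append(abs(s) ** 2)
    total = sum(power)
    cumulative = 0.0
    for k, p in enumerate(power, start=1):
        cumulative += p
        if cumulative >= total / 2:
            return k * fs / n  # frequency of the bin reaching half the power
    return half * fs / n

# Hypothetical example: a pure 80 Hz tone sampled at 1024 Hz has MDF = 80 Hz.
fs = 1024
sig = [math.sin(2 * math.pi * 80 * t / fs) for t in range(512)]
print(median_frequency(sig, fs))  # 80.0
```

In a fatigue protocol, MDF would be computed per analysis window; its downward slope over the contraction is the fatigue marker.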
Abstract:
Environmental decay in porous masonry materials, such as brick and mortar, is a widespread problem concerning both new and historic masonry structures. The decay mechanisms are quite complex, depending on several interconnected parameters and on the interaction with the specific micro-climate. Materials undergo aesthetic and substantial changes in character; yet, while many studies have been carried out, the mechanical aspect has been largely understudied, despite its real importance from the structural viewpoint. A quantitative assessment of masonry material degradation, and of how it affects the load-bearing capacity of masonry structures, appears to be missing. The research work carried out, limiting its attention to brick masonry, addresses this issue through an experimental laboratory approach based on different integrated testing procedures, both non-destructive and mechanical, together with monitoring methods. Attention was focused on the transport of moisture and salts and on the damaging effects caused by the crystallization of two different salts, sodium chloride and sodium sulphate. Many series of masonry specimens, very different in size and purpose, were used to track the damage process from its onset and to monitor its evolution over a number of years. At the same time, suitable testing techniques (non-destructive, mini-invasive, analytical, and monitoring) were validated for these purposes. The specimens were exposed to different aggressive agents (in terms of type of salt, brine concentration, artificial vs. open-air natural ageing, …), were tested by different means (qualitative vs. quantitative, non-destructive vs. mechanical testing, point-wise vs. wide-area, …), and had different sizes (1-, 2-, 3-header-thick walls, full-scale walls vs. small specimens, brick columns and triplets vs. small walls, masonry specimens vs. single brick units and mortar prisms, …).
Different advanced testing methods and novel monitoring techniques were applied in an integrated, holistic approach for the quantitative assessment of the masonry health state.
Abstract:
This thesis presents the advances achieved in the captive breeding practices of the European eel (Anguilla anguilla). The aspects investigated concern husbandry (broodstock selection, response to hormonal stimulation, reproductive performance, egg incubation), physiology (plasma endocrine profiles of the breeders), and engineering. Studies conducted on various wild eel populations showed that the main determining factor in selecting wild females for captive breeding must be the Silver Index, which can establish the stage of pubertal development. The hormonal induction protocol adopted, with increasing doses of carp pituitary extract, proved useful for ovarian development, with a synchronization effect that is positively reflected in egg production. Studies on the effects of photoperiod show that a condition of total darkness can positively influence reproduction in captivity. The effects of photoperiod were also investigated at the physiological level, by observing plasma levels of steroids (E2, T) and thyroid hormones (T3 and T4) and the hepatic expression of vitellogenin (vtg1 and vtg2) and of the estradiol membrane receptor (ESR1). A comparison between spontaneous spawning and insemination by stripping indicated that the former leads to a better qualitative and quantitative yield of fertilizable eggs, and that the presence of a percentage of completely transparent oocytes can be used to predict a good fertility rate. Finally, the design and implementation of a recirculating aquaculture system suited to the species-specific needs of the eel showed that, to improve reproductive results, low-flow and low-density incubation would be preferable.
Abstract:
Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them.

One of the most important applications of data deduplication is backup storage, where these approaches are able to reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design, which allows using a cluster of servers to perform exact data deduplication with small chunks in a scalable way.

Afterwards, a combination of compression approaches for an important, but often overlooked, data structure in data deduplication systems, so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, this compression enables significant savings.

A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk lookup disk bottleneck of data deduplication systems, which limits either their scalability or their throughput. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches. Furthermore, it is shown to be less prone to aging effects.

Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed. In most data sets, between 20% and 30% of the data can be classified as redundant. According to these results, future work should further investigate how data deduplication can be integrated into future HPC storage systems.

This thesis presents important novel work in different areas of data deduplication research.
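Fingerprinting-based deduplication splits data into chunks, computes a cryptographic fingerprint per chunk, and stores each unique chunk only once; a file is then represented by its recipe, the list of fingerprints. A minimal sketch with fixed-size chunks and SHA-1 fingerprints (the thesis's chunking, indexing, and recipe-compression schemes are far more elaborate):

```python
import hashlib

class DedupStore:
    """Toy fingerprint-based deduplicating store: fixed-size chunking,
    SHA-1 fingerprints, and a chunk index mapping fingerprint -> data."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.index = {}  # fingerprint -> chunk bytes (each unique chunk stored once)

    def write(self, data):
        """Store data; return its 'file recipe', the fingerprint list
        needed to reconstruct it."""
        recipe = []
        for off in range(0, len(data), self.chunk_size):
            chunk = data[off:off + self.chunk_size]
            fp = hashlib.sha1(chunk).hexdigest()
            if fp not in self.index:      # chunk lookup: is this chunk new?
                self.index[fp] = chunk    # store the unique chunk once
            recipe.append(fp)
        return recipe

    def read(self, recipe):
        """Reassemble a file from its recipe."""
        return b"".join(self.index[fp] for fp in recipe)

    def stored_bytes(self):
        return sum(len(c) for c in self.index.values())

# A second, identical backup run adds no new chunks.
store = DedupStore(chunk_size=4096)
backup = b"payload-" * 10000            # 80,000 logical bytes
r1 = store.write(backup)
r2 = store.write(backup)                # duplicate backup run
print(store.stored_bytes())             # 6272: duplicates removed within and across runs
print(store.read(r2) == backup)         # True
```

The chunk lookup (`fp not in self.index`) is the operation that, at scale, becomes the disk bottleneck the abstract describes: the fingerprint index no longer fits in RAM, which is what BLC-style caching addresses.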
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and function of a product on a virtual prototype. One use case is checking the safety clearances of individual parts, the so-called clearance analysis. Engineers determine for specific parts whether, both in their rest position and during a motion, they maintain a prescribed safety distance to the surrounding parts. If parts fall below the safety distance, their shape or position must be changed. For this, it is important to know exactly which regions of the parts violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call this the set of all tolerance-violating primitives. We present a holistic solution that can be divided into the following three major topics.

In the first part of this work we study algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are considerably faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for the computation of all tolerance-violating primitives, our dual-space approach consistently proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is above all important to take the required safety distance into account in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. Moreover, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the previously used uniform grids. We call this data structure Shrubs. Previous approaches to the memory optimization of uniform grids mostly rely on hashing methods, but these do not reduce the memory consumption of the cell contents. In our use case, neighboring cells often have similar contents. Our approach is able to losslessly compress the memory footprint of the cell contents of a uniform grid, exploiting the redundant cell contents, to one fifth of its former size, and to decompress it at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Besides pure clearance analysis, we show applications to various path-planning problems.
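The exact triangle-triangle tolerance tests and the dual-space formulation are beyond the scope of an abstract, but the conservative culling idea, classifying a primitive pair as violating or safe without an exact distance computation, can be sketched. The bounding-sphere and vertex-distance bounds below are assumptions standing in for the thesis's actual strategies:

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def bounding_sphere(tri):
    """Cheap bounding sphere: centroid plus maximum vertex distance."""
    c = tuple(sum(v[i] for v in tri) / 3.0 for i in range(3))
    return c, max(dist(c, v) for v in tri)

def classify_pair(tri_a, tri_b, safety):
    """Conservatively classify a triangle pair without an exact
    triangle-triangle distance test.
      'violating': some vertex pair is within the safety distance, so the
                   true minimum distance is certainly below it.
      'safe':      the bounding spheres are farther apart than the safety
                   distance, so the true minimum distance certainly exceeds it.
      'exact':     neither bound decides; an exact tolerance test is needed.
    """
    (ca, ra), (cb, rb) = bounding_sphere(tri_a), bounding_sphere(tri_b)
    if dist(ca, cb) - ra - rb > safety:
        return "safe"
    if min(dist(va, vb) for va in tri_a for vb in tri_b) <= safety:
        return "violating"
    return "exact"

a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
b_near = [(v[0], v[1], 0.5) for v in a]   # parallel copy, 0.5 above
b_far  = [(v[0], v[1], 5.0) for v in a]   # parallel copy, 5.0 above
print(classify_pair(a, b_near, safety=1.0))  # violating
print(classify_pair(a, b_far,  safety=1.0))  # safe
print(classify_pair(a, b_near, safety=0.1))  # exact (bounds cannot decide)
```

Only pairs classified as "exact" ever reach the expensive primitive-primitive test, which is the effect the broad-phase data structures aim for.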
Abstract:
The purpose of the study was to examine the effect of teacher experience on student progress and performance quality in an introductory applied lesson. Nine experienced teachers and 15 pre-service teachers taught an adult beginner to play ‘Mary Had a Little Lamb’ on a wind instrument. The lessons were videotaped for subsequent analysis of teaching behaviors and performance achievement. Following instruction, a random sample of teachers was interviewed about their perceptions of the lesson. A panel of adjudicators rated final pupil performances. No significant difference was found between pupils taught by experienced and pre-service teachers in the quality of their final performance. Systematic observation of the videotaped lessons showed that participant teachers provided relatively frequent and highly positive reinforcement during the lessons. Pupils of experienced teachers talked significantly more during the lessons than did pupils of pre-service teachers. Pre-service teachers modeled significantly more on their instruments than did experienced teachers.
Abstract:
Noninvasive blood flow measurements based on Doppler ultrasound are the main clinical tool for studying the cardiovascular status of fetuses at risk of circulatory compromise. Usually, qualitative analysis of peripheral arteries (and, in particular clinical situations such as severe growth restriction or volume overload, also of venous vessels close to the heart or of flow patterns within the heart) is used to gauge the level of compensation in a fetus. However, quantitative assessment of the driving force of the fetal circulation, the cardiac output, remains an elusive goal in fetal medicine. This article reviews methods for the direct and indirect assessment of cardiac function and explains new clinical applications. Part 1 of this review describes the concepts of cardiac function and cardiac output and the techniques that have been used to quantify output. Part 2 summarizes the use of arterial and venous Doppler studies in the fetus and gives a detailed description of indirect measurements of cardiac function (such as indices derived from the durations of segments of the cardiac cycle), with current examples of their application.
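One widely used example of an index derived purely from the durations of cardiac-cycle segments is the myocardial performance (Tei) index. Whether it is among the indices covered in Part 2 is not stated in the abstract, so the sketch below is illustrative only, with hypothetical timing values:

```python
def myocardial_performance_index(ict_ms, irt_ms, et_ms):
    """Tei (myocardial performance) index, computed from three
    cardiac-cycle segment durations:
        MPI = (ICT + IRT) / ET
    where ICT is the isovolumetric contraction time, IRT the
    isovolumetric relaxation time, and ET the ejection time."""
    return (ict_ms + irt_ms) / et_ms

# Hypothetical Doppler-derived timings (illustrative values only).
mpi = myocardial_performance_index(ict_ms=30, irt_ms=40, et_ms=170)
print(round(mpi, 2))  # 0.41
```

Because it is a ratio of durations, the index is independent of the absolute flow volume, which is exactly why such indices are usable when quantitative cardiac output cannot be measured reliably.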
Abstract:
The design of a high-density neural recording system targeting epilepsy monitoring is presented. Circuit challenges and techniques are discussed to optimize the amplifier topology and the included OTA. A new platform supporting active recording devices targeting wireless and high-resolution focus localization in epilepsy diagnosis is also proposed. The post-layout simulation results of an amplifier dedicated to this application are presented. The amplifier is designed in a UMC 0.18 µm CMOS technology, has an NEF of 2.19 and occupies a silicon area of 0.038 mm², while consuming 5.8 µW from a 1.8 V supply.
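The noise efficiency factor (NEF) quoted above relates an amplifier's input-referred noise to that of a single ideal bipolar transistor drawing the same total current. Given the reported NEF of 2.19 and the 5.8 µW / 1.8 V budget, the implied input-referred noise can be back-computed for an assumed bandwidth; the 10 kHz used below is an assumption, not a figure from the abstract.

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # assumed temperature, K
q = 1.602177e-19   # elementary charge, C

def input_noise_from_nef(nef, i_total, bandwidth):
    """Invert the standard NEF definition,
        NEF = V_rms,in * sqrt(2 * I_tot / (pi * U_T * 4kT * BW)),
    to obtain the input-referred RMS noise voltage."""
    u_t = k * T / q  # thermal voltage, ~25.9 mV at 300 K
    return nef * math.sqrt(math.pi * u_t * 4 * k * T * bandwidth / (2 * i_total))

i_total = 5.8e-6 / 1.8                # 5.8 uW from a 1.8 V supply -> ~3.2 uA
v_noise = input_noise_from_nef(2.19, i_total, bandwidth=10e3)  # 10 kHz assumed
print(f"{v_noise * 1e6:.1f} uVrms")   # ~3.2 uVrms over the assumed bandwidth
```

A result in the low-microvolt range is consistent with typical neural recording front-ends, but the actual noise figure depends on the amplifier's real bandwidth.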
Abstract:
As the performance gap between microprocessors and memory continues to increase, main memory accesses incur long latencies, which become a factor limiting system performance. Previous studies show that main memory access streams contain significant locality and that SDRAM devices provide parallelism through multiple banks and channels. This locality and parallelism have not been thoroughly exploited by conventional memory controllers. In this thesis, SDRAM address mapping techniques and memory access reordering mechanisms are studied and applied to memory controller design with the goal of reducing observed main memory access latency. The proposed bit-reversal address mapping attempts to distribute main memory accesses evenly in the SDRAM address space to enable bank parallelism. As memory accesses to distinct banks are interleaved, the access latencies are partially hidden and therefore reduced. By taking cache conflict misses into consideration, bit-reversal address mapping is able to direct potential row conflicts to different banks, further improving performance. The proposed burst scheduling is a novel access reordering mechanism that creates bursts by clustering accesses directed to the same rows of the same banks. Subject to a threshold, reads are allowed to preempt writes, and qualified writes are piggybacked at the end of the bursts. A sophisticated access scheduler selects accesses based on priorities and interleaves accesses to maximize SDRAM data bus utilization. Consequently, burst scheduling reduces the row conflict rate, increasing and exploiting the available row locality. Using revised SimpleScalar and M5 simulators, both techniques are evaluated and compared with existing academic and industrial solutions. With the SPEC CPU2000 benchmarks, bit-reversal reduces execution time by 14% on average over traditional page interleaving address mapping.
Burst scheduling also achieves a 15% reduction in execution time over conventional in-order bank scheduling. Working constructively together, bit-reversal and burst scheduling achieve a 19% speedup across the simulated benchmarks.
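The bit-reversal idea can be illustrated concretely: reversing the order of the address bits above the column offset makes the high-order bits, which are where cache-conflicting addresses typically differ, select the bank, so potential row conflicts land in different banks. The field widths below are illustrative, not the thesis's configuration:

```python
def bit_reverse(x, width):
    """Reverse the lowest `width` bits of x."""
    r = 0
    for _ in range(width):
        r = (r << 1) | (x & 1)
        x >>= 1
    return r

def map_address(addr, col_bits=10, bank_bits=2, row_bits=13):
    """Toy SDRAM address mapping with bit-reversal. Conventional page
    interleaving would split the bits above the column directly into
    {row | bank}; here the whole (bank + row) field is bit-reversed
    first, so high-order address bits end up selecting the bank."""
    col = addr & ((1 << col_bits) - 1)
    upper = addr >> col_bits
    rev = bit_reverse(upper, bank_bits + row_bits)
    bank = rev & ((1 << bank_bits) - 1)
    row = rev >> bank_bits
    return row, bank, col

# Two addresses that differ only in a high-order bit, as is typical of
# cache conflict misses: under conventional mapping both would fall in
# the same bank with different rows (a row conflict); under bit-reversal
# they are directed to different banks.
a0 = 0
a1 = 1 << (10 + 14)   # same column and low row bits, top address bit differs
print(map_address(a0)[1], map_address(a1)[1])  # 0 1
```

Spreading such conflicting streams across banks is what lets row activations in one bank overlap with data transfers in another.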
Abstract:
Tracking the user’s visual attention is a fundamental aspect of novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communication with virtual and real agents greatly benefit from the analysis of the user’s visual attention as a vital source of deictic references or turn-taking signals. Current approaches to determining visual attention rely primarily on monocular eye trackers; hence they are restricted to the interpretation of two-dimensional fixations relative to a defined area of projection. The study presented in this article compares the precision, accuracy and application performance of two binocular eye tracking devices. Two algorithms are compared which derive the depth information required for visual attention-based 3D interfaces. This information is further applied to an improved VR selection task in which a binocular eye tracker and an adaptive neural network algorithm are used during the disambiguation of partly occluded objects.
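Depth from binocular gaze data is commonly recovered by triangulating the two eyes' gaze rays: since measured rays rarely intersect exactly, the fixation point is estimated as the midpoint of the shortest segment between them. The geometric sketch below illustrates this vergence-based idea; it does not reproduce the specific algorithms compared in the study, and the eye positions are hypothetical.

```python
def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def add(u, v): return tuple(a + b for a, b in zip(u, v))
def scale(u, k): return tuple(a * k for a in u)
def dot(u, v): return sum(a * b for a, b in zip(u, v))

def gaze_fixation(eye_l, dir_l, eye_r, dir_r):
    """Estimate the 3D fixation point as the midpoint of the shortest
    segment between the left and right gaze rays."""
    w0 = sub(eye_l, eye_r)
    a, b, c = dot(dir_l, dir_l), dot(dir_l, dir_r), dot(dir_r, dir_r)
    d, e = dot(dir_l, w0), dot(dir_r, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:            # (near-)parallel rays: no vergence depth
        return None
    t = (b * e - c * d) / denom       # closest-point parameter on the left ray
    s = (a * e - b * d) / denom       # closest-point parameter on the right ray
    p_l = add(eye_l, scale(dir_l, t))
    p_r = add(eye_r, scale(dir_r, s))
    return scale(add(p_l, p_r), 0.5)

# Hypothetical eyes 6.4 cm apart fixating a point 0.5 m straight ahead.
target = (0.0, 0.0, 0.5)
eye_l, eye_r = (-0.032, 0.0, 0.0), (0.032, 0.0, 0.0)
fix = gaze_fixation(eye_l, sub(target, eye_l), eye_r, sub(target, eye_r))
print(fix)  # ~ (0.0, 0.0, 0.5)
```

With noisy tracker data the two closest points no longer coincide, and the length of the connecting segment gives a useful confidence measure for the depth estimate.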
Abstract:
Deep geological storage of radioactive waste foresees cementitious materials as reinforcement of tunnels and as backfill. Bentonite is proposed to enclose spent fuel drums and to serve as drift seals. The emplacement of cementitious material next to clay material generates an enormous chemical gradient in pore water composition that drives diffusive solute transport. Laboratory studies and reactive transport modeling predict significant mineral alteration at and near the interfaces, mainly resulting in a decrease of porosity in the bentonite. The goal of this project is to characterize and quantify these cement/bentonite skin effects spatially and temporally in laboratory experiments. A newly developed, mobile, X-ray transparent core infiltration device was used, which allows performing X-ray computed tomography (CT) periodically without interrupting a running experiment. A pre-saturated cylindrical MX-80 bentonite sample (1920 kg/m3 average wet density) is subjected to a confining pressure as a constant total pressure boundary condition. The infiltration of a hyperalkaline (pH 13.4) artificial OPC (ordinary Portland cement) pore water into the bentonite plug alters the mineral assemblage over time as an advancing reaction front. The resulting changes in X-ray attenuation reflect changes in phase densities, porosity and local bulk density, and are tracked over time by periodic, non-destructive CT scans.
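For reference, the initial porosity of a fully saturated bentonite plug follows directly from the reported average wet density via a two-phase mass balance. The grain and water densities below are assumed typical values, not figures from the abstract:

```python
def saturated_porosity(rho_wet, rho_grain, rho_water=1000.0):
    """For a fully saturated porous medium the wet bulk density is
        rho_wet = (1 - phi) * rho_grain + phi * rho_water,
    so the porosity phi is obtained by solving for phi."""
    return (rho_grain - rho_wet) / (rho_grain - rho_water)

# Reported average wet density: 1920 kg/m3. Assumed grain density for
# MX-80 montmorillonite: ~2760 kg/m3 (an assumption, not from the abstract).
phi = saturated_porosity(rho_wet=1920.0, rho_grain=2760.0)
print(f"{phi:.2f}")  # ~0.48
```

A porosity decrease at the reaction front raises the local bulk density and hence the X-ray attenuation, which is what makes the mineral alteration visible in the periodic CT scans.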