996 results for deep processing
Abstract:
Since 1986, several near-vertical seismic reflection profiles have been recorded in Switzerland in order to map the deep geologic structure of the Alps. One objective of this endeavour has been to determine the geometries of the autochthonous basement and of the external crystalline massifs, important elements for understanding the geodynamics of the Alpine orogeny. The PNR-20 seismic line W1, located in the Rawil depression of the western Swiss Alps, provides important information on this subject. It extends northward from the "Penninic front" across the Helvetic nappes to the Prealps. The crystalline massifs do not outcrop along this profile. Thus, the interpretation of "near-basement" reflections has to be constrained by down-dip projections of surface geology, "true amplitude" processing, rock physical property studies and modelling. 3-D seismic modelling has been used to evaluate the seismic response of two alternative down-dip projection models. To constrain the interpretation in the southern part of the profile, "true amplitude" processing has provided information on the strength of the reflections. Density and velocity measurements on core samples collected up-dip from the region of the seismic line have been used to evaluate reflection coefficients of typical lithologic boundaries in the region. The cover-basement contact itself is not a source of strong reflections, but strong reflections arise from within the overlying metasedimentary cover sequence, allowing the geometry of the top of the basement to be determined on the basis of "near-basement" reflections. The front of the external crystalline massifs is shown to extend beneath the Prealps, about 6 km north of the expected position. A 2-D model whose seismic response shows reflection patterns very similar to those observed is proposed.
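For context, the strength of such reflections is controlled by the acoustic impedance contrast across each lithologic boundary. The sketch below is a generic, minimal illustration (not the processing chain used in the study) of how a normal-incidence reflection coefficient follows from the kind of density and velocity measurements described above; the numerical values are purely illustrative.

```python
def reflection_coefficient(rho1, v1, rho2, v2):
    """Normal-incidence reflection coefficient from densities (kg/m^3)
    and P-wave velocities (m/s) of the upper (1) and lower (2) layers."""
    z1, z2 = rho1 * v1, rho2 * v2          # acoustic impedances
    return (z2 - z1) / (z2 + z1)

# Illustrative values only (not measurements from the study):
# a small impedance contrast yields a weak reflection
print(reflection_coefficient(2700, 6000, 2750, 6100))
```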
Abstract:
AMADEUS is a dexterous subsea robot hand incorporating force and slip contact sensing, using fluid-filled tentacles for fingers. Hydraulic pressure variations in each of three flexible tubes (bellows) in each finger create a bending moment, and consequent motion or increase in contact force during grasping. Such fingers have inherent passive compliance, no moving parts, and are naturally depth pressure-compensated, making them ideal for reliable use in the deep ocean. In addition to the mechanical design, development of the hand has also considered closed-loop finger position and force control, coordinated finger motion for grasping, force and slip sensor development/signal processing, and reactive world modeling/planning for supervisory "blind grasping". Initially, the application focus is on marine science tasks, but broader roles in offshore oil and gas, salvage, and military use are foreseen. Phase I of the project is complete, with the construction of a first prototype. Phase II is now underway, to deploy the hand from an underwater robot arm and carry out wet trials with users.
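As an illustration of the closed-loop force control mentioned above, here is a minimal discrete PI sketch; the sensor and actuator calls (read_contact_force, set_bellows_pressure) are hypothetical placeholders and not part of the AMADEUS software.

```python
def force_control_step(f_target, f_measured, state, kp=0.8, ki=0.2, dt=0.01):
    """One PI update: returns a bellows pressure command (arbitrary units)."""
    error = f_target - f_measured
    state["integral"] += error * dt
    return kp * error + ki * state["integral"]

# Hypothetical usage with placeholder sensor/actuator functions:
# state = {"integral": 0.0}
# while grasping:
#     f = read_contact_force(finger_id)          # placeholder sensor read
#     p_cmd = force_control_step(f_target, f, state)
#     set_bellows_pressure(finger_id, p_cmd)     # placeholder actuator command
```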
Abstract:
Basal ganglia and brain stem nuclei are involved in the pathophysiology of various neurological and neuropsychiatric disorders. Currently available structural T1-weighted (T1w) magnetic resonance images do not provide sufficient contrast for reliable automated segmentation of various subcortical grey matter structures. We use a novel, semi-quantitative magnetization transfer (MT) imaging protocol that overcomes limitations in T1w images, which are mainly due to their sensitivity to the high iron content in subcortical grey matter. We demonstrate improved automated segmentation of putamen, pallidum, pulvinar and substantia nigra using MT images. A comparison with segmentation of high-quality T1w images was performed in 49 healthy subjects. Our results show that MT maps are highly suitable for automated segmentation, and so for multi-subject morphometric studies with a focus on subcortical structures.
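As a simplified illustration of why improved MT contrast helps intensity-driven classification (this is not the segmentation pipeline used in the study, which relies on established neuroimaging tools), one could fit a Gaussian mixture to MT-map intensities within a region of interest:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_voxels(mt_values, n_classes=3, seed=0):
    """Cluster 1-D MT intensities into classes; returns a label per voxel."""
    gmm = GaussianMixture(n_components=n_classes, random_state=seed)
    x = np.asarray(mt_values, dtype=float).reshape(-1, 1)
    return gmm.fit_predict(x)

# Synthetic example: two intensity populations with distinct MT values
labels = classify_voxels(np.concatenate([np.random.normal(0.8, 0.05, 500),
                                         np.random.normal(1.2, 0.05, 500)]),
                         n_classes=2)
```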
Abstract:
The general aim of the thesis was to study university students’ learning from the perspective of regulation of learning and text processing. The data were collected from the two academic disciplines of medical and teacher education, which share the features of highly scheduled study, a multidisciplinary character, a complex relationship between theory and practice and a professional nature. Contemporary information society poses new challenges for learning, as it is not possible to learn all the information needed in a profession during a study programme. Therefore, it is increasingly important to learn how to think and learn independently, how to recognise gaps in and update one’s knowledge and how to deal with the huge amount of constantly changing information. In other words, it is critical to regulate one’s learning and to process text effectively. The thesis comprises five sub-studies that employed cross-sectional, longitudinal and experimental designs and multiple methods, from surveys to eye tracking. Study I examined the connections between students’ study orientations and the ways they regulate their learning. In total, 410 second-, fourth- and sixth-year medical students from two Finnish medical schools participated in the study by completing a questionnaire measuring both general study orientations and regulation strategies. The students were generally deeply oriented towards their studies. However, they regulated their studying externally. Several interesting and theoretically reasonable connections between the variables were found. For instance, self-regulation was positively correlated with deep orientation and achievement orientation and was negatively correlated with non-commitment. However, external regulation was likewise positively correlated with deep orientation and achievement orientation but also with surface orientation and systematic orientation. It is argued that external regulation might function as an effective coping strategy in the cognitively loaded medical curriculum. Study II focused on medical students’ regulation of learning and their conceptions of the learning environment in an innovative medical course where traditional lectures were combined with problem-based learning (PBL) group work. First-year medical and dental students (N = 153) completed a questionnaire assessing their regulation strategies of learning and views about the PBL group work. The results indicated that external regulation and self-regulation of the learning content were the most typical regulation strategies among the participants. In line with previous studies, self-regulation was connected with study success. Strictly organised PBL sessions were not considered as useful as lectures, although the students’ views of the teacher/tutor and the group were mainly positive. Therefore, developers of teaching methods are challenged to think of new solutions that facilitate reflection on one’s learning and that improve the development of self-regulation. In Study III, a person-centred approach to studying regulation strategies was employed, in contrast to the traditional variable-centred approach used in Study I and Study II. The aim of Study III was to identify different regulation strategy profiles among medical students (N = 162) across time and to examine to what extent these profiles predict study success in preclinical studies. Four regulation strategy profiles were identified, and connections with study success were found.
Students with the lowest self-regulation and with an increasing lack of regulation performed worse than the other groups. As the person-centred approach enables us to identify students with diverse regulation patterns, it could be used in supporting student learning and in facilitating the early diagnosis of learning difficulties. In Study IV, 91 student teachers participated in a pre-test/post-test design where they answered open-ended questions about a complex science concept both before and after reading either a traditional, expository science text or a refutational text that prompted the reader to revise his/her beliefs towards the scientific view of the phenomenon. The student teachers completed a questionnaire concerning their regulation and processing strategies. The results showed that the students’ understanding improved after the text-reading intervention and that the refutational text promoted understanding better than the traditional text. Additionally, regulation and processing strategies were found to be connected with understanding the science phenomenon. A weak trend showed that weaker learners would benefit more from the refutational text. It seems that learners with effective learning strategies are able to pick out the relevant content regardless of the text type, whereas weaker learners might benefit from refutational parts that contrast the most typical misconceptions with scientific views. The purpose of Study V was to use eye tracking to determine how third-year medical students (n = 39) and internal medicine residents (n = 13) read and solve patient case texts. The results revealed differences between medical students and residents in processing patient case texts; compared to the students, the residents were more accurate in their diagnoses and processed the texts significantly faster and with a lower number of fixations. Different reading patterns were also found. The observed differences between medical students and residents in processing patient case texts could be used in medical education to model expert reasoning and to teach how a good medical text should be constructed. The main findings of the thesis indicate that even among very selected student populations, such as high-achieving medical students or student teachers, there seems to be a lot of variation in regulation strategies of learning and text processing. As these learning strategies are related to successful studying, students enter educational programmes with rather different chances of managing and achieving success. Further, the ways of engaging in learning seldom centre on a single strategy or approach; rather, students seem to combine several strategies to a certain degree. Sometimes, it can be a matter of perspective which way of learning is considered best; therefore, the reality of studying in higher education is often more complicated than the simplistic view of self-regulation as a good quality and external regulation as a harmful one. The beginning of university studies may be stressful for many, as the gap between high school and university studies is huge and the strategies that were adequate during high school might not work as well in higher education. Therefore, it is important to map students’ learning strategies and to encourage them to use high-quality learning strategies from the beginning. Instead of separate courses on learning skills, the integration of these skills into course contents should be considered.
Furthermore, learning complex scientific phenomena could be facilitated by providing high-quality learning materials and texts, and other support from the learning environment, at the university level as well. Eye tracking seems to have great potential for evaluating performance and growing diagnostic expertise in text processing, although more research using texts as stimuli is needed. Both medical and teacher education programmes, and the professions themselves, are challenging in terms of their multidisciplinary nature and increasing amounts of information, and therefore require good lifelong learning skills during the study period and later in working life.
Abstract:
Deep learning algorithms form a new set of powerful methods for machine learning. The idea is to combine layers of latent factors into hierarchies. This often entails a higher computational cost and also increases the number of model parameters. Applying these methods to larger-scale problems therefore requires reducing their cost as well as improving their regularization and optimization. This thesis addresses the question from these three perspectives. We first study the problem of reducing the cost of certain deep algorithms. We propose two methods for training restricted Boltzmann machines and denoising auto-encoders on sparse, high-dimensional distributions. This is important for applying these algorithms to natural language processing. Both methods (Dauphin et al., 2011; Dauphin and Bengio, 2013) use importance sampling to sample the objective of these models. We observe that this significantly reduces training time, with speed-ups reaching two orders of magnitude on several benchmarks. Second, we introduce a powerful regularizer for deep methods. Experimental results show that a good regularizer is crucial for obtaining good performance with large networks (Hinton et al., 2012). In Rifai et al. (2011), we propose a new regularizer that combines unsupervised learning with tangent propagation (Simard et al., 1992). This method exploits geometric principles and achieved state-of-the-art results at the time of publication. Finally, we consider the problem of optimizing high-dimensional non-convex surfaces such as those of neural networks. Traditionally, the abundance of local minima was considered the main difficulty in these problems. In Dauphin et al. (2014a) we argue, on the basis of results from statistical physics, random matrix theory, neural network theory and experiments, that a deeper difficulty stems from the proliferation of saddle points. In that paper we also propose a new method for non-convex optimization.
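The sampling idea for sparse, high-dimensional inputs can be illustrated with a rough sketch: compute the reconstruction error exactly on the non-zero dimensions and estimate the remainder from a reweighted random subset of the zero dimensions. This is a schematic illustration under simplifying assumptions, not the exact estimator of Dauphin et al.:

```python
import numpy as np

def sampled_reconstruction_loss(x, x_hat, n_zero_samples=32, rng=np.random):
    """Estimate a squared reconstruction error against a sparse target x:
    exact sum over non-zero dimensions, plus a reweighted sample of zeros."""
    nz = np.flatnonzero(x)
    zeros = np.flatnonzero(x == 0)
    loss = np.sum((x[nz] - x_hat[nz]) ** 2)        # exact term on non-zeros
    if zeros.size:
        k = min(n_zero_samples, zeros.size)
        idx = rng.choice(zeros, size=k, replace=False)
        weight = zeros.size / k                    # importance reweighting
        loss += weight * np.sum((x[idx] - x_hat[idx]) ** 2)
    return loss

# Toy usage: a sparse binary input and a dense reconstruction
x = np.zeros(10000); x[[3, 42, 999]] = 1.0
x_hat = np.full_like(x, 0.01)
print(sampled_reconstruction_loss(x, x_hat))
```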
Abstract:
This thesis is entitled "Distribution, Diversity and Biology of Deep-Sea Fishes of the Indian EEZ". Fishing rights, and the responsibilities they entail in the deep-sea sector, have been a vexed issue since the mid-nineties, and various stakeholders have different opinions on the modalities of harnessing the marine fisheries wealth, especially from the oceanic and deeper waters. The exploitation and utilization of these resources require technology development and upgradation in harvest and post-harvest areas, besides shore infrastructure for berthing, handling, storing and processing facilities. At present, although deep-sea fishes do not have any ready market in our country, they can be converted into value-added products. Many problems have so far confronted the deep-sea fishing sector, not allowing it to reach its full potential. Hence, there should be a sound deep-sea fishing policy revolving around the upgradation of the capabilities of small-scale fishermen, who have the inherent skills but do not have adequate support to develop themselves and to acquire vessels capable of operating in farther and deeper waters. Prospects for the commercial exploitation and utilization of deep-sea fishes were analyzed using a SWOT analysis.
Abstract:
Air frying is being projected as an alternative to deep fat frying for producing snacks such as French fries. In air frying, the raw potato sections are essentially heated in hot air containing fine oil droplets, which dehydrates the potato and attempts to impart the characteristics of traditionally produced French fries, but with a substantially lower level of fat absorbed in the product. The aim of this research is to compare: 1) the process dynamics of air frying with those of conventional deep fat frying under otherwise similar operating conditions, and 2) the products formed by the two processes in terms of color, texture, microstructure, calorimetric properties and sensory characteristics. Although air frying produced products with a substantially lower fat content and similar moisture contents and color characteristics, it required much longer processing times, typically 21 minutes compared with 9 minutes for deep fat frying. The slower evolution of temperature also resulted in lower rates of moisture loss and color development reactions. DSC studies revealed that the extent of starch gelatinization was also lower in the air fried product. In addition, the two types of frying also resulted in products having significantly different texture and sensory characteristics.
Abstract:
Plasma immersion ion implantation (PIII) is a three-dimensional surface modification method that is by now quite mature and well known to the surface engineering community, especially to those working in the field of plasma-materials interaction, aiming at both industrial and academic applications. More recently, deposition methods have been added to PIII, giving PIII&D and opening up a broader range of applications of these techniques. PIII&D is thus becoming a routine method of surface modification, with the advantage of pushing up the retained dose levels otherwise limited by the sputtering due to ion implantation. Therefore, well-adherent, thick, three-dimensional films without stress can be achieved, at relatively low cost, using PIII&D. In this paper, we discuss a few PIII and PIII&D experiments that have been performed recently to achieve surface improvements in different materials: 1 - high temperature nitrogen PIII in Ti6Al4V alloy, in which a deep nitrogen-rich treated layer resulted in surface improvements such as increased hardness, corrosion resistance and wear resistance of the Ti alloy; 2 - nanostructures in ZnO films, obtained by PIII&D from a vaporized and ionized Zn source; 3 - combined implantation and deposition of calcium for biomaterial activity of Ti alloy (PIII&D), allowing the growth of hydroxyapatite in a body solution; 4 - magnetron sputtering deposition of Cr enhanced by a glow discharge Ar plasma to allow implantation and deposition of Cr on SAE 1070 steel (PIII&D), resulting in surfaces with high resistance to corrosion; and 5 - implantation of nitrogen by ordinary PIII into this Cr film, which improved resistance to corrosion while keeping the tribological properties as good as those of the SAE 1070 steel surface. © 2012 Elsevier B.V.
Abstract:
Graduate programme in Mechanical Engineering - FEG
Abstract:
Ground-based Earth troposphere calibration systems play an important role in planetary exploration, especially for radio science experiments aimed at the estimation of planetary gravity fields. In these experiments, the main observable is the spacecraft (S/C) range rate, measured from the Doppler shift of an electromagnetic wave transmitted from the ground, received by the spacecraft and coherently retransmitted back to the ground. Once the solar corona and interplanetary plasma noise has been removed from the Doppler data, the Earth troposphere remains one of the main error sources in the tracking observables. Current Earth media calibration systems at NASA's Deep Space Network (DSN) stations are based upon a combination of weather data and multidirectional, dual-frequency GPS measurements acquired at each station complex. In order to support Cassini's cruise radio science experiments, a new generation of media calibration systems was developed, driven by the need to achieve an end-to-end Allan deviation of the radio link on the order of 3×10⁻¹⁵ at 1000 s integration time. The future ESA BepiColombo mission to Mercury carries scientific instrumentation for radio science experiments (a Ka-band transponder and a three-axis accelerometer) which, in combination with the S/C telecommunication system (an X/X/Ka transponder), will provide the most advanced tracking system ever flown on an interplanetary probe. The current error budget for MORE (Mercury Orbiter Radioscience Experiment) allows the residual uncalibrated troposphere to contribute a value of 8×10⁻¹⁵ to the two-way Allan deviation at 1000 s integration time. The current standard ESA/ESTRACK calibration system is based on a combination of surface meteorological measurements and mathematical algorithms capable of reconstructing the Earth troposphere path delay, leaving an uncalibrated component of about 1-2% of the total delay. In order to satisfy the stringent MORE requirements, the short time-scale variations of the Earth troposphere water vapor content must be calibrated at the ESA deep space antennas (DSA) with more precise and stable instruments (microwave radiometers). In parallel to these high-performance instruments, ESA ground stations should be upgraded with media calibration systems at least capable of calibrating both troposphere path delay components (dry and wet) at the sub-centimetre level, in order to reduce S/C navigation uncertainties. The natural choice is to provide a continuous troposphere calibration by processing GNSS data acquired at each complex by dual-frequency receivers already installed for station location purposes. The work presented here outlines the troposphere calibration technique developed to support both deep space probe navigation and radio science experiments. After an introduction to deep space tracking techniques, observables and error sources, Chapter 2 investigates the troposphere path delay in depth, reporting the estimation techniques and the state of the art of the ESA and NASA troposphere calibrations. Chapter 3 deals with an analysis of the status and the performance of the NASA Advanced Media Calibration (AMC) system as applied to the Cassini data analysis. Chapter 4 describes the current release of a GNSS software (S/W) developed to estimate the troposphere calibration to be used for ESA S/C navigation purposes. During the development phase of the S/W, a test campaign was undertaken in order to evaluate the S/W performance.
A description of the campaign and the main results are reported in Chapter 5. Chapter 6 presents a preliminary analysis of microwave radiometers to be used to support radio science experiments. The analysis has been carried out considering radiometric measurements of the ESA/ESTEC instruments installed in Cabauw (NL) and compared with the requirements of MORE. Finally, Chapter 7 summarizes the results obtained and defines some key technical aspects to be evaluated and taken into account for the development phase of future instrumentation.
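As a concrete example of the "surface meteorology plus mathematical algorithm" approach mentioned for the standard ESA/ESTRACK system, the hydrostatic (dry) part of the zenith path delay is commonly modelled with the Saastamoinen formula; the sketch below is a generic textbook implementation, not the operational ESA code.

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    """Zenith hydrostatic delay in metres from surface pressure (hPa),
    station latitude (rad) and station height (m) - Saastamoinen model."""
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.28e-6 * height_m)

# Example: ~1013 hPa at mid-latitude near sea level -> roughly 2.3 m of delay
print(saastamoinen_zhd(1013.25, math.radians(45.0), 50.0))
```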
An Integrated Transmission-Media Noise Calibration Software For Deep-Space Radio Science Experiments
Abstract:
The thesis describes the implementation of calibration, format-translation and data-conditioning software for radiometric tracking data of deep-space spacecraft. All of the propagation-media noise rejection techniques available as features in the code are covered in their mathematical formulations, performance and software implementations. Some techniques are retrieved from the literature and the current state of the art, while other algorithms have been conceived ex novo. All three typical deep-space refractive environments (solar plasma, ionosphere, troposphere) are dealt with by employing specific subroutines. Specific attention has been reserved for the GNSS-based tropospheric path delay calibration subroutine, since it is the bulkiest module of the software suite, in terms of both the sheer number of lines of code and development time. The software is currently in its final stage of development and, once completed, will serve as a pre-processing stage for orbit determination codes. Calibration of transmission-media noise sources in radiometric observables has proved to be an essential operation to be performed on radiometric data in order to meet the increasingly demanding error budget requirements of modern deep-space missions. A completely autonomous and all-around propagation-media calibration software package is a novelty in orbit determination, although standalone codes are currently employed by ESA and NASA. The described S/W is planned to be compatible with the current standards for tropospheric noise calibration used by both these agencies, such as the AMC, TSAC and ESA IFMS weather data, and it natively works with the Tracking Data Message (TDM) file format adopted by CCSDS as a standard aimed at promoting and simplifying inter-agency collaboration.
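To illustrate the kind of rejection involved for the dispersive media (solar plasma and ionosphere), a standard dual-frequency combination that cancels a charged-particle delay scaling as 1/f² is sketched below; this is a generic textbook combination, not necessarily the specific multi-frequency scheme implemented in the software described here.

```python
def dispersion_free(obs_f1, obs_f2, f1_hz, f2_hz):
    """Combine two observables affected by a charged-particle delay ~ 1/f^2
    (e.g. range at two link frequencies) so that the dispersive term cancels."""
    f1sq, f2sq = f1_hz ** 2, f2_hz ** 2
    return (f1sq * obs_f1 - f2sq * obs_f2) / (f1sq - f2sq)

# Toy check: true geometric range 1000.0 m plus a 1/f^2 plasma term
f1, f2 = 8.4e9, 32.0e9                     # illustrative X- and Ka-band values
bias = 5.0e19
r1, r2 = 1000.0 + bias / f1**2, 1000.0 + bias / f2**2
print(dispersion_free(r1, r2, f1, f2))     # ~1000.0, plasma term removed
```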
Abstract:
Recently, the global meat market has been facing several dramatic changes due to shifts in diet and lifestyle, consumer demands, and economic considerations. Firstly, there has been a tremendous increase in the demand for poultry meat. Furthermore, current forecast and projection studies point out that the expansion of the poultry market will continue in the future. In response to this demand, the growth rate of meat-type chickens has been increased with great success over the last few decades in order to optimize the production of poultry meat. As a consequence, the increase in growth rate has induced the appearance of several muscle abnormalities such as the pale-soft-exudative (PSE) syndrome and deep pectoral myopathy (DPM) and, more recently, white striping and wooden breast. Currently, there is growing interest in the meat industry in understanding the magnitude of the effect of these abnormalities on different quality traits of raw and processed meat. Therefore, the major part of the research activities during the PhD project was dedicated to evaluating the implications of recent muscle abnormalities such as white striping and wooden breast for meat quality traits, and their incidence under commercial conditions. Generally, our results showed that the incidence of these muscle abnormalities was very high under commercial conditions and had a great adverse impact on meat quality traits. Secondly, there is a growing market share of convenient, healthy, and functional processed meat products. Accordingly, the remaining part of the research activities of the PhD project was dedicated to evaluating the possibility of formulating processed meat products with a healthier perceived profile, such as phosphate-free marinated chicken meat and low-sodium marinated rabbit meat products. Overall, the findings showed that sodium bicarbonate can be considered a promising component to replace phosphates in meat products, while potassium chloride, under certain conditions, was successfully used to produce low-sodium marinated rabbit meat products.
Abstract:
In recent years, deep learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely a kind of brute-force statistical approach and whether they can only work in the context of High Performance Computing with enormous amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning and are the best choices for understanding and pointing out the strengths and weaknesses of each. CNNs are considered among the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. They are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook for solving face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a lower quantity of data, HTM can outperform CNN.
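For reference, a minimal CNN image classifier of the general kind compared in the dissertation is sketched below, assuming PyTorch; the actual architectures, datasets and hyper-parameters used in the thesis are not reproduced here.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Tiny convolutional classifier: two conv/pool stages and a linear head."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2))                       # 16x16 -> 8x8
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Forward pass on a random batch of 32x32 RGB images
logits = SmallCNN()(torch.randn(4, 3, 32, 32))   # shape: (4, 10)
```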