978 results for "Inverse filtering technique"
Abstract:
We propose a modification of the nonlinear digital signal processing technique based on nonlinear inverse synthesis for systems with distributed Raman amplification. The proposed path-average approach offers a 3 dB performance gain, regardless of the signal power profile.
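A sketch of the underlying idea, in notation of our own choosing rather than the paper's (q is the field envelope, f(z) the normalized power profile shaped by the distributed Raman gain over a span of length L): the z-varying power is folded into an effective nonlinear coefficient, so the channel is approximated by a lossless NLSE of the integrable form that nonlinear inverse synthesis requires.

\gamma_{\mathrm{eff}} \;=\; \gamma\,\frac{1}{L}\int_0^{L} f(z)\,\mathrm{d}z,
\qquad
i\,\frac{\partial q}{\partial z} \;-\; \frac{\beta_2}{2}\,\frac{\partial^2 q}{\partial t^2} \;+\; \gamma_{\mathrm{eff}}\,|q|^2 q \;=\; 0 .

Because the power variations average out over the span, the same lossless model (and hence the same nonlinear Fourier transform machinery) applies whatever the pump configuration, which is consistent with the claimed insensitivity to the signal power profile.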
Abstract:
We show both numerically and experimentally that dispersion management can be realized by manipulating the dispersion of a filter in a passively mode-locked fibre laser. A programmable filter whose dispersion can be configured in software is employed in the laser. Solitons, stretched pulses, and dissipative solitons can be targeted reliably by controlling the filter transmission function alone, while the fibre lengths in the laser remain fixed. This technique shows remarkable advantages for controlling operating regimes in ultrafast fibre lasers, in contrast to the traditional technique in which dispersion management is achieved by optimizing the relative lengths of fibres with opposite-sign dispersion. Our versatile ultrafast fibre laser will be attractive for applications requiring different pulse profiles, such as optical signal processing and optical communications.
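A back-of-the-envelope view of why reprogramming the filter alone can switch regimes (our notation, not the authors'): the filter's quadratic spectral phase adds directly to the fixed fibre contribution in the net cavity dispersion budget,

\phi_{\mathrm{filter}}(\omega) \;=\; \frac{\phi_2}{2}\,(\omega-\omega_0)^2,
\qquad
D_{\mathrm{net}} \;\approx\; \sum_i \beta_{2,i} L_i \;+\; \phi_2 ,

so software tuning of the group-delay dispersion \phi_2 moves D_{\mathrm{net}} between anomalous (soliton), near-zero (stretched-pulse), and normal (dissipative-soliton) values without touching the fibres.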
Abstract:
The authors would like to express their gratitude to organizations and people that supported this research. Piotr Omenzetter’s work within the Lloyd’s Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research. Ben Ryder of Aurecon and Graeme Cummings of HEB Construction assisted in obtaining access to the bridge and information for modelling. Luke Williams and Graham Bougen, undergraduate research students, assisted with testing.
Abstract:
This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of the optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager, and increasing sensitivity in any one dimension can significantly compromise the others.
This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract additional bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing-process modeling, and reconstruction algorithm of each sensing system.
Optical multidimensional imaging measures three or more dimensions of information in the optical signal. Traditional multidimensional imagers acquire the extra dimensions at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging approach provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise performance, while maintaining or even gaining temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a hundredfold or more.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Accomplishing the same task by engineering means usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into the compressive sensing framework to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows multiple speakers to be localized in both stationary and dynamic auditory scenes, and mixed conversations from independent sources to be distinguished with a high audio recognition rate.
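A minimal sketch of the compressive-sensing loop that both the imaging and the acoustic systems above instantiate, assuming a generic linear model y = Phi x and a plain ISTA reconstruction (variable names and sizes are illustrative, not the dissertation's):

import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 8                             # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse scene

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # coded (multiplexing) sensing matrix
y = Phi @ x                                      # compressed, 2D-detector-style data

step = 1.0 / np.linalg.norm(Phi, 2) ** 2         # 1/L, L = Lipschitz constant of the gradient
lam = 0.05
x_hat = np.zeros(n)
for _ in range(500):                             # ISTA: gradient step + sparsity prox
    g = x_hat - step * (Phi.T @ (Phi @ x_hat - y))
    x_hat = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

Roughly speaking, in the imaging systems the rows of Phi encode the spectral, temporal, or polarization modulation, while in the acoustic sensor they encode the metamaterial's direction-dependent frequency response.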
Abstract:
We assess the performance of an inverse Lagrangian dispersion technique for its suitability to quantify leakage from geological storage of CO2. We find the technique is accurate (Q_bLS/Q = 0.99, σ = 0.29) when strict meteorological filtering is applied to ensure that Monin-Obukhov similarity theory is valid for the periods analysed, and when downwind enrichments in tracer gas concentration are 1% or more above the background concentration. Because of their respective baseline atmospheric concentrations, this enrichment criterion is less onerous for CH4 than for CO2. Therefore, for geologically sequestered gas reservoirs with a significant CH4 component, monitoring CH4 as a surrogate for CO2 leakage could be as much as 10 times more sensitive than monitoring CO2 alone. Additional recommendations for designing a robust atmospheric monitoring strategy for geosequestration include: continuous concentration data; exact inter-calibration of upwind and downwind concentration measurements; use of an array of point concentration sensors to maximise the use of spatial information about the leakage plume; and precise isotope-ratio measurement to confirm the source of any concentration elevations detected.
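The tenfold-sensitivity argument can be made concrete with a rough calculation under typical background mole fractions that we assume here (they are not values from the paper):

# Enrichment needed to satisfy the 1%-above-background criterion,
# using illustrative atmospheric backgrounds (not the paper's values).
background_ppm = {"CO2": 410.0, "CH4": 1.9}
needed_ppm = {gas: 0.01 * c for gas, c in background_ppm.items()}
print(needed_ppm)   # CO2 needs ~4.1 ppm of enrichment; CH4 only ~0.019 ppm

# A leak of reservoir gas containing even a ~5% CH4 fraction therefore
# crosses the CH4 threshold at roughly a tenth of the flux needed to
# cross the CO2 threshold, which is the sense in which CH4 monitoring
# can be about 10x more sensitive.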
Abstract:
This Licentiate Thesis is devoted to the presentation and discussion of some new contributions in applied mathematics directed towards scientific computing in sports engineering. It considers inverse problems of biomechanical simulation with rigid-body musculoskeletal systems, especially in cross-country skiing. This contrasts with the main body of research on cross-country skiing biomechanics, which is based mainly on experimental testing. The thesis consists of an introduction and five papers. The introduction motivates the context of the papers and puts them into a more general framework. Two papers (D and E) consider real questions in cross-country skiing, which are modelled and simulated. The results give some interesting indications concerning these challenging questions, which can be used as a basis for further research; however, the measurements are not accurate enough to give final answers. Paper C is a simulation study, more extensive than papers D and E, that is compared to electromyography measurements from the literature. Validation in biomechanical simulation is difficult, and reducing mathematical errors is one way of coming closer to realistic results. Paper A examines well-posedness for forward dynamics with full muscle dynamics, and paper B is a technical report describing the problem formulation, mathematical models, and simulation of paper A in more detail. Our new modelling, together with the simulations, enables new possibilities. As with simulations in other engineering fields, these must be handled with care in order to achieve reliable results. The results in this thesis indicate that mathematical modelling and numerical simulation can be very useful for describing cross-country skiing biomechanics. Hence, this thesis contributes to the possibility of beginning to use and develop such modelling and simulation techniques in this context as well.
Abstract:
The main aim of this work is to develop a methodology to evaluate the characteristics of the porous media in a filter using a radiotracer technique. To do this, an experimental prototype filter was developed, made up of an acrylic cylinder, vertically mounted and supported on the lower side by a controlled leaking valve. Two filtering media (acrylic spheres and silica crystals) were used to follow the movement of water through the porous media using 123I in its MIBG (iodine-123-meta-iodo-benzyl-guanidine) form. An instantaneous injection of the substance upstream of the filter makes it possible to follow the passage of the radioactive cloud through two NaI 2×2″ scintillation detectors positioned before and immediately after the cylinder containing the filtering element (porous media). The signals recorded by the detectors during the passage of the radioactive cloud are analyzed through statistical functions using the weighted-moment method, which makes it possible to calculate the residence time (the time the tracer takes to pass completely through the filter) via the dispersion equation for tubular flow and the one-directional flow of the radiotracer in the porous media.
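A minimal sketch of the moment computation behind such a residence-time estimate, assuming uniformly sampled detector count-rate curves (array names and the synthetic test curves are ours, not from the work itself): the mean residence time is the difference between the first temporal moments of the downstream and upstream curves.

import numpy as np

def mean_residence_time(t, c_in, c_out):
    """First temporal moment of each detector curve; their difference is
    the mean residence time of the tracer inside the porous bed."""
    t_in = np.trapz(t * c_in, t) / np.trapz(c_in, t)
    t_out = np.trapz(t * c_out, t) / np.trapz(c_out, t)
    return t_out - t_in

# Illustrative curves: the downstream cloud is delayed and dispersed.
t = np.linspace(0.0, 120.0, 1201)                 # seconds
c_in = np.exp(-0.5 * ((t - 20.0) / 3.0) ** 2)     # upstream detector
c_out = np.exp(-0.5 * ((t - 55.0) / 8.0) ** 2)    # downstream detector
print(f"mean residence time ~ {mean_residence_time(t, c_in, c_out):.1f} s")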
Abstract:
The conjugate gradient method is among the most popular optimization methods for solving large systems of linear equations. In a system identification problem, for example, where a very long impulse response is involved, it is necessary to apply a particular strategy that reduces the delay while improving the convergence time. In this paper we propose a new scheme that combines frequency-domain adaptive filtering with a conjugate gradient technique in order to realize a high-order multichannel adaptive filter that is delayless and guarantees a very short convergence time.
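For reference, the core conjugate-gradient iteration such a scheme builds on, in generic form (plain NumPy; the paper's frequency-domain, multichannel, and delayless machinery is not reproduced here):

import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros(b.size)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter or b.size):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # A-conjugate update of the direction
        rs = rs_new
    return x

# In the adaptive-filtering setting, A would be the input autocorrelation
# matrix and b the cross-correlation vector of the Wiener normal equations.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)        # SPD test matrix
b = rng.standard_normal(50)
assert np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b))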
Abstract:
A main prediction of the zoom lens model of visual attention is that performance is an inverse function of the size of the attended area. The "attention shift paradigm" developed by Sperling and Reeves (1980) was adapted here to test predictions of the zoom lens model. In two experiments, two lists of items were presented simultaneously using the rapid serial visual presentation technique. Subjects were to report, after seeing the target (the letter T), the first item they were able to identify in the series that did not include the target. In one condition, subjects knew in which list the target would appear; in another condition, they did not have this knowledge and had to attend to both positions in order to detect the target. The zoom lens model predicts an interaction between this variable and the distance separating the two positions where the lists are presented. In both experiments, this interaction was observed. The results are also discussed as a resolution of the apparently contradictory findings regarding the analog movement model.
Abstract:
This research seeks to identify the causes of the high failure and dropout rate in the Computer Science Technology program. Our study was conducted at the Cégep de Saint-Hyacinthe in the winter of 2005. We also extended our reflection to the search for solutions to this problem, which, we believe, could be applied in other cégeps or even in other technical programs. We wanted to propose applicable solutions rather than draw up an exhaustive list of them. To achieve this, we limited our field of analysis to the following areas: study time, paid work, the quality of pedagogical intervention, and the shock of the transition from secondary school to college. We also wanted to verify whether the graduation rate in Computer Science Technology at the Cégep de Saint-Hyacinthe, which we estimated at 35%, has evolved from 1994 to the present. We also sought to establish a link between study time and time spent in paid work, and from there we tried to find the correlation between study time and academic success. Finally, our last objective was to interview stakeholders at different levels in order to collect the solutions they proposed to the problem raised. In addition, we surveyed all students in the program by questionnaire to gauge their level of satisfaction. We divided this study into four chapters, the first of which defines the problem. In that chapter, our intention was to identify the main failure and dropout problems observed in computer science at the Cégep de Saint-Hyacinthe and to suggest solutions. The second chapter consists of a literature review that reinforces our reflection with references from renowned researchers. The third chapter describes the methodology used to collect the data and the statements of the respondents. The fourth chapter reports on the data collection carried out in the winter of 2005, which consisted of questionnaires and structured interviews; in this same chapter, the data are presented, analyzed, and synthesized using graphs and tables. Nearly 90 respondents were interviewed or surveyed by questionnaire, and more than 110 statistical tables from the Service régional d'admission du Montréal métropolitain (SRAM) served as the basis for this study. Finally, by way of conclusion, this study allowed us to present a synthesis of all the work carried out over the course of this research. Here, in summary, are the results. Our analysis of the statistical data allowed us to draw a portrait of the average student in Computer Science Technology at the Cégep de Saint-Hyacinthe. He is a young man of about 18 at enrolment in the program; he spends between 5 and 7 hours studying, a little less in first year and a little more in third year, and he holds a paid job about 15 hours per week. It should be noted that the program's student population is almost exclusively male. The computer science department professors interviewed made it clear that the shock of the transition from secondary school to college is very much present. Students arriving from secondary school are often used to succeeding without studying. They often experience their first academic failure at cégep and feel very helpless in the face of this situation.
They are unaware of the resources at their disposal and do not dare ask their teachers for help. The various stakeholders consulted proposed solutions such as offering workshops or courses on note-taking, time management, and priority management, and, finally, publicizing the services already offered to students to help them succeed. We can mention here that students in the program made practically no use of the success assistance centre during the past year. They often wait too long before asking for help and are often left with no choice but to drop out of the program. Teachers also have a duty to detect students who need help; they feel ill-equipped to help these students and expressed a need for assistance in this regard. As mentioned previously, we had estimated the program's graduation rate at about 35%. The analysis of the statistics revealed that this rate has shown slight progress since 1994. To our surprise, however, we found that this rate is slightly above the average of the other colleges in the province (SRAM) and even above that of other predominantly male programs at the Cégep de Saint-Hyacinthe (see Graph 2, p. 53). We wanted to know what our students who had dropped out of the program thought, and conversely what those who had graduated thought. Our graduate respondents all had jobs in computing and admitted to having succeeded through sheer willpower. Their main motivation for completing their studies was to obtain an interesting and well-paid job, and they did the assigned work because it prepared them well for the exams. However, our respondents who dropped out of the program confided that a paid job taking up too many hours per week, and too little study time, had contributed to their dropping out. We observed that time spent in paid work does not influence time spent studying; on the other hand, time spent studying does affect success. We add here that too much time spent in paid work and not enough on studies promotes failure and dropping out. In conclusion, the student who believes in his success takes the means to achieve it. The theory we stated at the beginning of this work, that only the best-organized students succeed, is therefore verified, but we can unfortunately also observe that the least-organized students drop out of the program. The questionnaires completed by all the students in the program revealed a clear imbalance in the workload required in the transition from first year to second year. Our interviews with the program's professors confirmed that students find the transition from first to second year difficult. Are we witnessing a displacement of the secondary-to-college shock towards a first-to-second-year shock? Have we pushed the problem into second year by trying to ease the transition from secondary school to college? Care should now be taken not to push the problem into third year; it would be a shame if the shock occurred upon entry into the job market. It is therefore of prime importance that students be well prepared for the stages that follow.
We would do no one a service by making success too easy, only to have the job market reject our students. This, then, is why, after this caution, six projects will be put in place to promote our students' success while maintaining the high-quality training that is the hallmark of the Computer Science Technology program at the Cégep de Saint-Hyacinthe. Here is the list of these projects, a description of which can be found in section 3.4, entitled "Entrevues avec un cadre de la direction des études" (interviews with a senior manager of academic affairs): a) implementation of the work-study program (Alternance travail-études, ATE); b) creation of an intervention team for first-year students; c) creation of an information and communications technology (ICT) assistance centre; d) implementation of peer tutoring; e) promotion of the computer science program; and, finally, f) assignment of a professor to the department's technical services to foster the adoption of leading-edge technologies. All these measures, we hope, will enable the computer science program at the Cégep de Saint-Hyacinthe to stand out through its innovation and its determination to address the low graduation rate while offering training of the highest quality.
Abstract:
Nowadays, leukodepletion is one of the most important processes performed on blood to reduce the risk of transfusion-transmitted diseases. It can be performed through different techniques, but the most popular is filtration, owing to its simplicity and efficiency. This work aims at improving a current commercial product by developing a new filter that uses a Fenton-type reaction to cross-link a hydrogel onto the base material. Filters for leukodepletion are preferably made by the melt-flow technique, resulting in a non-woven fabric; the functionalization should increase the stability of the filter, restricting the extraction of substances to a minimum when in contact with blood. Through this modification the filters can acquire new properties, including wettability, surface charge, and good resistance to extraction. The most important of these for leukodepletion is surface charge, owing to the nature of the filtration process. The results for all modified samples were compared with the commercial product. Three different polymers (A, B, and C) were studied for the filter modifications, and every modified filter was tested to determine its properties.
Abstract:
Imaging technologies are widely used in application fields such as the natural sciences, engineering, medicine, and the life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies for solving these ill-posed IPs rely on variational regularization methods, which are based on the minimization of suitable energies and make use of knowledge about the image formation model (forward operator) and prior knowledge about the solution, but lack a systematic way of incorporating knowledge directly from data. On the other hand, more recent learned approaches can readily learn the intricate statistics of images from a large data set, but have no systematic method for incorporating prior knowledge about the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods that combine the benefits of these two reconstruction strategies for the solution of highly nonlinear ill-posed inverse problems. Mathematical formulations and numerical approaches for imaging IPs, including linear as well as strongly nonlinear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating regularization learned by a data-adaptive neural network. Furthermore, we investigate the solution of nonlinear ill-posed IPs by introducing a deep-PnP framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a recently introduced and promising imaging technique. Efficient algorithms are then applied to the solution of the limited-electrode problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore noisy solutions of the Computed Tomography problem recovered using the filtered back-projection method.
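A pseudocode-grade sketch of the plug-and-play idea described above: a damped Gauss-Newton data step whose regularization step is replaced by a learned denoiser. Here F, J, and denoiser are placeholders of our own, not the thesis's actual EIT operators or network.

import numpy as np

def pnp_gauss_newton(x0, y, F, J, denoiser, n_iter=20, mu=1e-2):
    """Plug-and-play proximal Gauss-Newton sketch: F maps parameters to
    data, J gives its Jacobian, and a learned denoiser stands in for the
    proximal map of a handcrafted regulariser."""
    x = x0.copy()
    for _ in range(n_iter):
        Jx = J(x)
        r = F(x) - y
        # Damped (Levenberg-Marquardt-style) Gauss-Newton step
        dx = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ r)
        x = denoiser(x + dx)      # learned prior applied as a proximal map
    return x

# Smoke test with a linear forward model and the identity as "denoiser".
A = np.array([[2.0, 0.3], [0.1, 1.5]])
x_true = np.array([1.0, -2.0])
x_rec = pnp_gauss_newton(np.zeros(2), A @ x_true,
                         F=lambda x: A @ x, J=lambda x: A,
                         denoiser=lambda x: x)
print(np.round(x_rec, 4))        # ~ [1., -2.]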
Abstract:
To evaluate the outcomes in patients treated for distal-third humerus fractures with the MIPO technique and visualization of the radial nerve through an accessory approach, in patients without radial palsy before surgery. The patients were treated with the MIPO technique. Visualization and isolation of the radial nerve was achieved through an approach between the brachialis and the brachioradialis, via an oblique incision on the lateral side of the arm. The Mayo Elbow Performance Score (MEPS) was used to evaluate elbow function. Seven patients were evaluated, with a mean age of 29.8 years. The average follow-up was 29.85 months. Postoperative radial neurapraxia occurred in three patients. Sensory recovery occurred after 3.16 months on average, and motor recovery after 5.33 months on average, in all affected patients. We achieved fracture consolidation in all patients (mean = 4.22 months). The average ranges of flexion-extension and pronation-supination were 112.85° and 145°, respectively. The average MEPS was 86.42. There was no case of infection. This approach made it possible to rule out interposition of the radial nerve at the fracture site and/or under the plate, and was associated with a high rate of fracture consolidation and good recovery of elbow range of motion. Level of Evidence IV, Case Series.
Abstract:
Objective. The aim of this study was to evaluate the alteration of human enamel bleached with high concentrations of hydrogen peroxide associated with different activators. Materials and methods. Fifty enamel/dentin blocks (4 × 4 mm) were obtained from human third molars and randomly divided according to the bleaching procedure (n = 10): G1 = 35% hydrogen peroxide (HP - Whiteness HP Maxx); G2 = HP + halogen lamp (HL); G3 = HP + 7% sodium bicarbonate (SB); G4 = HP + 20% sodium hydroxide (SH); and G5 = 38% hydrogen peroxide (OXB - Opalescence Xtra Boost). The bleaching treatments were performed in three sessions with a 7-day interval between them. The enamel content, before (baseline) and after bleaching, was determined using an FT-Raman spectrometer and was based on the concentrations of phosphate, carbonate, and organic matrix. Statistical analysis was performed using two-way repeated-measures ANOVA and Tukey's test. Results. The results showed no significant differences between times of analysis (p = 0.5175) for most treatments and peak areas analyzed, nor among bleaching treatments (p = 0.4184). Comparisons during and after bleaching revealed a significant difference in the HP group for the carbonate and organic matrix peak areas, and for the organic matrix in the OXB and HP + SH groups. Tukey's analysis determined that the differences in peak areas and the interaction among treatment, time, and peak were statistically significant (p < 0.05). Conclusion. The association of activators with hydrogen peroxide was effective in altering the enamel, mainly with regard to the organic matrix.
Abstract:
Context. The possibility of cephalic venous hypertension, with resultant facial edema and elevated cerebrospinal fluid pressure, continues to challenge head and neck surgeons who perform bilateral radical neck dissections in simultaneous or staged procedures. Case Report. Staging the procedure in patients who require bilateral neck dissection allows collateral venous drainage to develop, mainly through the internal and external vertebral plexuses, thereby minimizing the risk of deleterious consequences. Nevertheless, this approach has disadvantages, such as a delay in definitive therapy, the need for a second hospitalization and anesthesia, and the risk of cutting lymphatic vessels and spreading viable cancer cells. In this paper, we discuss the rationale and feasibility of preserving the external jugular vein (EJV). Considering the limited number of similar reports in the literature, two cases in which this procedure was accomplished are described. The relevant anatomy and technique are reviewed, and the patients' outcomes are discussed. Conclusion. Preservation of the EJV during bilateral neck dissections is technically feasible, fast, and safe, with clinically and radiologically demonstrated patency.