945 results for Surface wave methods
Abstract:
This work is devoted to the problem of reconstructing the basis weight structure of a paper web with black-box techniques. The data analyzed come from a real paper machine and were collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing properties of the paper web. In a simplified version of our algorithm, ARMA and the DFT are used independently to represent the given signal, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test, combined with the Root Mean Squared Error coefficient, gives a tool to separate significant signals from noise.
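A minimal sketch (in Python, using statsmodels) of the kind of significance test described above: an ARMA model is fitted to a synthetic scanner-like signal, and the Ljung-Box Q-statistic on the residuals is combined with the RMSE. The signal, the ARMA order, and the lag are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
# Hypothetical scanner signal: a slow periodic basis-weight variation plus noise.
signal = 0.8 * np.sin(2 * np.pi * t / 64) + rng.normal(0, 0.5, n)

# Fit an ARMA(2, 1) model (order chosen for illustration only).
model = ARIMA(signal, order=(2, 0, 1)).fit()

# Ljung-Box lack-of-fit test on the residuals: small p-values indicate
# remaining autocorrelation, i.e. the model has not captured everything.
lb = acorr_ljungbox(model.resid, lags=[20])
rmse = np.sqrt(np.mean(model.resid ** 2))
print(f"Ljung-Box p-value: {float(lb['lb_pvalue'].iloc[0]):.3f}, RMSE: {rmse:.3f}")
```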
Abstract:
In this work, annealing and growth of CuInS2 thin films are investigated with quasi-real-time in situ Raman spectroscopy. During annealing, a shift of the Raman A1 mode towards lower wave numbers with increasing temperature is observed. A linear temperature dependence of the phonon branch of −2 cm⁻¹ per 100 K is evaluated. The investigation of the growth process (sulfurization of metallic precursors) with high surface sensitivity reveals the occurrence of phases that are not detected with bulk-sensitive methods. This allows detailed insight into the formation of the CuInS2 phases. Independently of the stoichiometry and doping of the starting precursors, the CuAu ordering of CuInS2 initially forms as the dominant ordering. The transformation of the CuAu ordering into the chalcopyrite one is, in contrast, strongly dependent on the precursor composition and requires high temperatures.
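As a small illustration of the reported linear temperature dependence, the sketch below extrapolates the A1 peak position from an assumed room-temperature value (about 290 cm⁻¹, a typical literature figure for the chalcopyrite CuInS2 A1 mode, not a value from this work):

```python
# Illustrative only: linear shift of the A1 mode, roughly -2 cm^-1 per 100 K.
def a1_position(temp_k, omega_300k=290.0, slope=-2.0 / 100.0):
    """Estimated A1 Raman shift (cm^-1) at temperature temp_k (K); assumed values."""
    return omega_300k + slope * (temp_k - 300.0)

for T in (300, 400, 500):
    print(f"T = {T} K -> A1 at about {a1_position(T):.1f} cm^-1")
```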
Abstract:
The purpose of this work was to test two measurement methods related to the water vapour tightness of packages: one already in use at the research centre and one developed for the research centre in this work. The results obtained were compared with each other and with values measured from the material. Food packages were also studied using humidity sensors, a shelf-life test, and transport simulation. Optimization was used to study the effect of package shape on water vapour tightness. The method developed for measuring the water vapour transmission of a package worked well and its repeatability was good. When compared with the existing method, the new method was found to be faster and to require less working time, but both methods gave good values for parallel samples. The humidity sensors made it possible to study changes in the humidity inside an empty package during storage. The shelf-life test was carried out with breakfast cereal, and the best water vapour barrier was provided by packages containing an aluminium laminate or a metallized OPP layer. In the first transport test the packages were filled with cereal and in the second test with noodles. The transport simulation had no effect on the integrity of the inner surfaces of the packages, and thus none on their water vapour tightness. The optimization compared the volume-to-surface-area ratio of packages of different shapes and the dependence of water vapour tightness on surface area. The optimal package was found to be a sphere, which had the smallest surface area, the highest water vapour transmission allowed by the material, and the smallest amount of water vapour barrier.
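As a rough illustration of the shape comparison described above, the following sketch compares the surface area and volume-to-area ratio of a sphere, a cylinder, and a cube at a fixed (arbitrarily chosen) 1-litre volume; the sphere's minimal area is what makes it the optimal package:

```python
import math

V = 1e-3  # package volume in m^3 (1 litre, an arbitrary illustrative choice)

r_sphere = (3 * V / (4 * math.pi)) ** (1 / 3)
area_sphere = 4 * math.pi * r_sphere ** 2

a_cube = V ** (1 / 3)
area_cube = 6 * a_cube ** 2

# Cylinder with height equal to diameter (the area-minimizing cylinder).
r_cyl = (V / (2 * math.pi)) ** (1 / 3)
area_cyl = 2 * math.pi * r_cyl ** 2 + 2 * math.pi * r_cyl * (2 * r_cyl)

for name, area in [("sphere", area_sphere), ("cylinder", area_cyl), ("cube", area_cube)]:
    print(f"{name:8s}: area = {area * 1e4:.1f} cm^2, V/A = {V / area * 100:.2f} cm")
```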
Abstract:
The fast development of new technologies such as digital medical imaging has brought about an expansion of brain functional studies. One of the key methodological issues in brain functional studies is comparing neuronal activation between individuals. In this context, the great variability of brain size and shape is a major problem. Current methods allow inter-individual comparisons by normalising subjects' brains to a standard brain. Widely used standard brains are the proportional grid of Talairach and Tournoux and the Montreal Neurological Institute (MNI) standard brain (SPM99). However, these methods lack the precision needed to superpose the more variable portions of the cerebral cortex (e.g., the neocortex and the perisylvian zone) and brain regions that are highly asymmetric between the two cerebral hemispheres (e.g., the planum temporale). The aim of this thesis is to evaluate a new image processing technique based on non-linear model-based registration.
In contrast to intensity-based registration, model-based registration uses spatial rather than intensity information to fit one image to another. We extract identifiable anatomical features (point landmarks) in both the deforming and the target image, and from their correspondence we determine the appropriate deformation in 3D. As landmarks, we use six control points situated, bilaterally: one on Heschl's gyrus, one on the motor hand area, and one on the sylvian fissure. The evaluation of this model-based approach is performed on MRI and fMRI images of nine of the eighteen subjects who participated in the study of Maeder et al. Results on anatomical (MRI) images show the movement of the deforming brain's control points to the locations of the reference brain's control points. The distance from the deforming brain to the reference brain is smaller after registration than before it. Registration of functional (fMRI) images does not show a significant variation: the small number of registration landmarks (six) is obviously not sufficient to produce significant modifications of the fMRI statistical maps. This thesis opens the way to a new computational technique for cortex registration, whose main direction will be improving the registration algorithm by using not a single point as a landmark but many points representing one particular sulcus.
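A minimal sketch of landmark-driven non-rigid registration in the spirit of the approach above: a smooth 3D deformation is interpolated from six point correspondences. A thin-plate-spline interpolator is used here as a stand-in for the thesis' deformation model (an assumption), and the landmark coordinates are made-up placeholders:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical landmark coordinates (mm): deforming brain -> reference brain.
src = np.array([[30, 40, 50], [-30, 40, 50], [25, -10, 60],
                [-25, -10, 60], [45, 0, 10], [-45, 0, 10]], float)
dst = src + np.array([[2, -1, 0], [-2, 1, 1], [1, 2, -1],
                      [0, -2, 1], [3, 0, 0], [-3, 0, 0]], float)

# Interpolate the displacement field from the six correspondences.
warp = RBFInterpolator(src, dst - src, kernel='thin_plate_spline')

# Apply the deformation to arbitrary voxel coordinates.
voxels = np.array([[0.0, 0.0, 0.0], [20.0, 20.0, 40.0]])
print(voxels + warp(voxels))
```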
Abstract:
The calculation of an axial flux permanent magnet machine is conventionally done by means of 3D FEM methods, so that the radius-dependent and thus non-uniform structure of the teeth and of the other electrical and magnetic parts of the machine can be taken into consideration. This calculation procedure, however, requires a lot of time and computer resources. This study proves that analytical methods can also be applied to perform the calculation successfully. The analytical calculation can be summarized in the following steps: first the magnet is divided into slices, the calculation is carried out for each section individually, and the partial results are then combined into the final result. It is obvious that using this method can save a lot of design and calculation time. The calculation program is designed to model the magnetic and electrical circuits of surface-mounted axial flux permanent magnet synchronous machines in such a way that it takes into account possible magnetic saturation of the iron parts. The result of the calculation is the torque of the motor, including its vibrations. The motor geometry, the materials, and either the torque or the pole angle are defined, and the motor can be fed with three-phase currents of arbitrary shape and amplitude. There are no limits on the size or the number of pole pairs, nor on many other factors. The calculation steps and the number of magnet sections are selectable, but the calculation time depends strongly on them. The results are compared to measurements of real prototypes. The permanent magnet creates part of the flux in the magnetic circuit. The form and amplitude of the flux density in the air gap depend on the geometry and material of the magnetic circuit, on the length of the air gap, and on the remanence flux density of the magnet. Slotting is taken into account by using the Carter factor in the slot opening area. The calculation is simple and fast if the magnet shape is a square with no skew in relation to the stator slots. With a more complicated magnet shape the calculation has to be done in several sections, and it is clear that as the number of sections increases, the result becomes more accurate. In a radial flux motor all sections of the magnets create force at the same radius. In an axial flux motor, each radial section creates force at a different radius, and the torque is the sum of these contributions. The magnetic circuit of the motor, consisting of the stator iron, rotor iron, air gap, magnet, and slot, is modelled with a reluctance net which considers the saturation of the iron. This means that several iterations, in which the permeability is updated, have to be done in order to get the final results. The motor torque is calculated using the instantaneous flux linkage and stator currents. Flux linkage is the part of the flux, created by the permanent magnets and the stator currents, that passes through the coils in the stator teeth. The angle between this flux and the phase currents defines the torque created by the magnetic circuit. Owing to the winding structure of the stator, and in order to limit the leakage flux, the slot openings of the stator are normally not made of ferromagnetic material, even though in some cases semi-magnetic slot wedges are used. At the slot opening faces the flux enters the iron almost normally (tangentially with respect to the rotor flux), creating tangential forces on the rotor. This phenomenon is called cogging.
The flux in the slot opening area on the two sides of the opening and in different slot openings is not equal, so these forces do not compensate each other. In the calculation it is assumed that the flux entering the left side of the opening is the component to the left of the geometrical centre of the slot. This torque component, together with the torque component calculated using the Lorentz force, makes up the total torque of the motor. It is easy to see that when all the magnet edges, where the derivative of the magnet flux density is at its highest, enter the slot openings at the same time, the result is a considerable cogging torque. To reduce the cogging torque, the magnet edges can be shaped so that they are not parallel to the stator slots, which is the common way to solve the problem. In doing so, the edge may be spread along the whole slot pitch, and thus the high derivative component will also be spread to occur evenly along the rotation. Besides shaping the magnets, they may also be placed somewhat asymmetrically on the rotor surface. The asymmetric distribution can be made in many different ways: all the magnets may have a different deflection from the symmetrical centre point, or they can, for example, be shifted in pairs. Some factors limit the deflection. The first is that the magnets cannot overlap; the magnet shape and its width relative to the pole define the deflection in this case. The other factor is that shifting the poles limits the maximum torque of the motor: if the edges of adjacent magnets are very close to each other, the leakage flux from one pole to the other increases, thus reducing the air-gap magnetization. The asymmetric model needs some assumptions and simplifications in order to limit the size of the model and the calculation time. The reluctance net is made for a symmetric distribution. If the magnets are distributed asymmetrically, the flux in the different pole pairs will not be exactly the same. Therefore, the assumption that the flux flows from the edges of the model to the next pole pairs (in the calculation model, from one edge to the other) is not correct. If this fact were to be taken into account in multi-pole-pair machines, all the poles, in other words the whole machine, would have to be modelled in the reluctance net. The error resulting from this incorrect assumption is, nevertheless, negligible.
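A toy sketch of the torque summation over radial sections described above (T = Σ Fᵢrᵢ); the geometry and the per-slice tangential force are illustrative assumptions, standing in for values that the reluctance-net model would actually provide:

```python
import numpy as np

r_inner, r_outer = 0.05, 0.10   # active radii of the magnet (m), assumed
n_sections = 10                  # number of radial slices (selectable)

edges = np.linspace(r_inner, r_outer, n_sections + 1)
r_mid = 0.5 * (edges[:-1] + edges[1:])          # mean radius of each slice
dr = np.diff(edges)

# Assumed tangential force per unit radial length (N/m) for each slice;
# in the real calculation this would come from the magnetic-circuit model.
f_tangential = 200.0 * np.ones(n_sections)

torque = np.sum(f_tangential * dr * r_mid)       # T = sum of F_i * r_i
print(f"total torque ~ {torque:.2f} Nm")
```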
Abstract:
Quality inspection and assurance is a very important step when today's products are sold to markets. As products are produced in vast quantities, the interest in automating quality inspection tasks has increased correspondingly. Quality inspection tasks usually require the detection of deficiencies, defined as irregularities in this thesis. Objects containing regular patterns appear quite frequently in certain industries and sciences, e.g. half-tone raster patterns in the printing industry, crystal lattice structures in solid state physics, and solder joints and components in the electronics industry. In this thesis, the problem of regular patterns and irregularities is described in analytical form and three different detection methods are proposed. All the methods are based on the ability of the Fourier transform to represent regular information compactly. The Fourier transform enables the separation of the regular and irregular parts of an image, but the three methods presented are shown to differ in generality and computational complexity. The need to detect fine and sparse details is common in quality inspection tasks, e.g. locating small fractures in components in the electronics industry or detecting tearing in paper samples in the printing industry. In this thesis, a general definition of such details is given by defining sufficient statistical properties in the histogram domain. The analytical definition allows a quantitative comparison of methods designed for detail detection. Based on the definition, the use of existing thresholding methods is shown to be well motivated. A comparison of thresholding methods shows that minimum error thresholding outperforms the other standard methods. The results are successfully applied to a paper printability and runnability inspection setup: missing dots in a repeating raster pattern are detected from Heliotest strips, and small surface defects from IGT picking papers.
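A compact sketch of the Fourier-based idea: a regular pattern concentrates its energy in a few strong frequency peaks, so suppressing those peaks and transforming back leaves mostly the irregularities. The synthetic raster image and the percentile threshold are illustrative assumptions, not the thesis' actual methods:

```python
import numpy as np

x = np.arange(256)
pattern = np.sin(2 * np.pi * np.outer(x, np.ones(256)) / 8)  # regular stripes
image = pattern.copy()
image[100:104, 100:104] += 3.0                # an "irregularity" (defect)

F = np.fft.fft2(image)
mag = np.abs(F)
# Zero out the dominant peaks (the regular pattern), identified here by a
# simple percentile threshold on the spectral magnitudes.
thresh = np.percentile(mag, 99.9)
F_irregular = np.where(mag > thresh, 0, F)

residual = np.real(np.fft.ifft2(F_irregular))
print("max residual at defect:", np.abs(residual[98:106, 98:106]).max())
```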
Abstract:
Aim of study: To identify species of wood samples based on common names and anatomical analyses of their transversal surfaces (without microscopic preparations). Area of study: Spain and South America. Material and methods: The test was carried out on a batch of 15 lumber samples deposited in the Royal Botanical Garden in Madrid, from the expedition by Ruiz and Pavón (1777-1811). The first stage of the methodology is to search and critically analyse the databases that list common nomenclature alongside scientific nomenclature. A geographic filter was then applied to the resulting information for the samples with a more restricted distribution. Finally, an anatomical verification was carried out with a pocket microscope at ×40 magnification, equipped with a scale of 50 micrometre resolution. Main results: Identification of the wood based exclusively on the common name is not useful because of the high number of alternative possibilities (14 for “naranjo”, 10 for “ébano”, etc.). The common name of one of the samples (“huachapelí mulato”) enabled the geographic origin of the samples to be accurately located to the shipyard area in Guayaquil (Ecuador). Given that Ruiz and Pavón did not travel to Ecuador, the specimens must have been obtained by Tafalla. It was possible to identify 67% of the lumber samples from the batch correctly; in 17% of the cases the methodology did not provide a reliable identification. Research highlights: It was possible to identify 67% of the lumber samples from the batch correctly, together with their geographic provenance. Identification of the wood based exclusively on the common name is not useful.
Abstract:
This master's thesis deals with the measurement of paper surface roughness, which is one of the central problems in the study of paper materials. The measurement methods used in the paper industry have several drawbacks, such as inaccuracy and unsuitability for measuring smooth papers, as well as strict laboratory-condition requirements and slowness. In this work, methods based on optical scattering were studied for determining surface roughness. Machine vision and image processing techniques were investigated on rough paper surfaces. The algorithms used in the study were implemented in Matlab®. The results obtained demonstrate the feasibility of measuring surface roughness by imaging. The best agreement between the traditional and the imaging-based method was given by a method based on fractal dimension.
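A minimal sketch of fractal-dimension estimation by box counting, one common way to compute the fractal-dimension measure that gave the best agreement above; the binary test image is a made-up stand-in for a thresholded roughness image:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((256, 256)) > 0.7            # hypothetical binary surface map

def box_count(binary, size):
    """Number of size x size boxes containing at least one foreground pixel."""
    s = binary.shape[0] // size * size
    b = binary[:s, :s].reshape(s // size, size, s // size, size)
    return np.count_nonzero(b.any(axis=(1, 3)))

sizes = np.array([2, 4, 8, 16, 32])
counts = np.array([box_count(img, s) for s in sizes])

# The slope of log N(s) against log(1/s) estimates the fractal dimension.
dim = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]
print(f"estimated box-counting dimension: {dim:.2f}")
```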
Abstract:
BACKGROUND: Studies in bipolar disorder (BD) to date are limited in their ability to provide a whole-disease perspective: their scope has generally been confined to a single disease phase and/or a specific treatment. Moreover, most clinical trials have focused on the manic phase of the disease, not on depression, which is associated with the greatest disease burden. There are few longitudinal studies covering both types of patients with BD (I and II) and the whole course of the disease, regardless of patients' symptomatology. Therefore, the Wide AmbispectiVE study of the clinical management and burden of Bipolar Disorder (WAVE-bd) (NCT01062607) aims to provide reliable information on the management of patients with BD in daily clinical practice. It also seeks to determine the factors influencing clinical outcomes and resource use in relation to the management of BD. METHODS: WAVE-bd is a multinational, multicentre, non-interventional, longitudinal study. Approximately 3000 patients diagnosed with BD type I or II with at least one mood event in the preceding 12 months were recruited at centres in Austria, Belgium, Brazil, France, Germany, Portugal, Romania, Turkey, Ukraine and Venezuela. The site selection methodology aimed to provide a balanced cross-section of patients cared for by different types of providers of medical aid (e.g. academic hospitals, private practices) in each country. Target recruitment percentages were derived either from scientific publications or from expert panels in each participating country. The minimum follow-up period will be 12 months, with a maximum of 27 months, taking into account the retrospective and prospective parts of the study. Data on demographics, diagnosis, medical history, clinical management, clinical and functional outcomes (CGI-BP and FAST scales), adherence to treatment (DAI-10 scale and Medication Possession Ratio), quality of life (EQ-5D scale), healthcare resources, and caregiver burden (BAS scale) will be collected. Descriptive analyses with common statistics will be performed. DISCUSSION: This study will provide detailed descriptions of the management of BD in different countries, particularly in terms of clinical outcomes and resources used. It should thus provide psychiatrists with reliable and up-to-date information about the factors associated with different management patterns of BD. TRIAL REGISTRATION: ClinicalTrials.gov NCT01062607.
Abstract:
In order to improve the efficacy and safety of treatments, drug dosage needs to be adjusted to the actual needs of each patient in a truly personalized medicine approach. Key to widespread dosage adjustment is the availability of point-of-care devices able to measure plasma drug concentration in a simple, automated, and cost-effective fashion. In the present work, we introduce and test a portable, palm-sized transmission-localized surface plasmon resonance (T-LSPR) setup, built from off-the-shelf components and coupled with DNA-based aptamers specific to the antibiotic tobramycin (467 Da). The core of the T-LSPR setup consists of aptamer-functionalized gold nanoislands (NIs) deposited on a glass slide covered with fluorine-doped tin oxide (FTO), which acts as the biosensor. The gold NIs exhibit localized plasmon resonance in the visible range, matching the sensitivity of the complementary metal oxide semiconductor (CMOS) image sensor employed as the light detector. The combination of gold NIs on the FTO substrate, which causes irregularity in NI size and pattern, may reduce the overall sensitivity but confers extremely high stability in solutions of high ionic strength, allowing the sensor to withstand numerous regeneration cycles without sensing losses. With this rather simple T-LSPR setup, we show real-time label-free detection of tobramycin in buffer, measuring concentrations down to 0.5 μM. We determined an affinity constant for the aptamer-tobramycin pair consistent with the value obtained using a commercial propagating-wave-based SPR instrument. Moreover, our label-free system can detect tobramycin in filtered undiluted blood serum, measuring concentrations down to 10 μM with a theoretical detection limit of 3.4 μM. While the association signal of tobramycin onto the aptamer is masked by the serum injection, the captured tobramycin can be quantified during the dissociation phase, leading to a linear calibration curve over the tested concentration range (10-80 μM). The plasmon shift following surface binding is calculated in terms of both plasmon peak location and hue, with the latter allowing faster data processing and real-time display of the results. The presented T-LSPR system shows, for the first time, label-free direct detection and quantification of a small molecule in the complex matrix of filtered undiluted blood serum. Its uncomplicated construction and compact size, together with its remarkable performance, represent a leap forward toward effective point-of-care devices for therapeutic drug concentration monitoring.
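As an illustration of the hue-based readout, the sketch below reduces hypothetical mean RGB readings of the CMOS sensor to a single hue angle, which shifts as the resonance moves; the RGB triplets are invented, not measured data:

```python
import colorsys

frames_rgb = [
    (120, 180, 90),   # hypothetical frame before analyte binding
    (115, 178, 98),   # during association
    (110, 175, 105),  # after binding: resonance (and hue) has shifted
]

for i, (r, g, b) in enumerate(frames_rgb):
    # colorsys expects components in [0, 1] and returns hue in [0, 1).
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    print(f"frame {i}: hue = {hue * 360:.1f} deg")
```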
Abstract:
Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
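A bare-bones sketch of this kind of MCMC inversion: Metropolis sampling with a log-posterior composed of a data-misfit term and a model-structure constraint (here an l1 norm on neighbouring-pixel differences). The forward model is a trivial linear stand-in for the actual plane-wave EM solver, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix = 20
G = rng.normal(size=(30, n_pix))              # toy linear forward operator
m_true = np.repeat([1.0, 3.0], n_pix // 2)    # blocky "true" model
d_obs = G @ m_true + rng.normal(0, 0.1, 30)

def log_post(m, sigma=0.1, lam=5.0):
    misfit = np.sum((G @ m - d_obs) ** 2) / (2 * sigma ** 2)
    structure = lam * np.sum(np.abs(np.diff(m)))   # model-structure constraint
    return -(misfit + structure)

m = np.ones(n_pix)
lp = log_post(m)
for _ in range(20000):
    prop = m + rng.normal(0, 0.05, n_pix)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:        # Metropolis accept/reject
        m, lp = prop, lp_prop

print("posterior sample:", np.round(m, 2))
print("true model:     ", m_true)
```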
Abstract:
This thesis presents experimental studies of rare earth (RE) metal induced structures on Si(100) surfaces. Two divalent RE metal adsorbates, Eu and Yb, are investigated on nominally flat Si(100) and on vicinal, stepped Si(100) substrates. Several experimental methods have been applied, including scanning tunneling microscopy/spectroscopy (STM/STS), low energy electron diffraction (LEED), synchrotron radiation photoelectron spectroscopy (SR-PES), Auger electron spectroscopy (AES), thermal desorption spectroscopy (TDS), and work function change measurements (Δφ). Two stages can be distinguished in the initial growth of the RE/Si interface: the formation of a two-dimensional (2D) adsorbed layer at submonolayer coverage and the growth of a three-dimensional (3D) silicide phase at higher coverage. The 2D phase is studied for both adsorbates in order to discover whether they produce common reconstructions or reconstructions common to the other RE metals. For studies of the 3D phase Yb is chosen due to its ability to crystallize in a hexagonal AlB2 type lattice, which is the structure of RE silicide nanowires, therefore allowing for the possibility of the growth of one-dimensional (1D) wires. It is found that despite their similar electronic configuration, Eu and Yb do not form similar 2D reconstructions on Si(100). Instead, a wealth of 2D structures is observed and atomic models are proposed for the 2×3-type reconstructions. In addition, adsorbate induced modifications on surface morphology and orientational symmetry are observed. The formation of the Yb silicide phase follows the Stranski-Krastanov growth mode. Nanowires with the hexagonal lattice are observed on the flat Si(100) substrate, and moreover, an unexpectedly large variety of growth directions are revealed. On the vicinal substrate the growth of the silicide phase as 3D islands and wires depends drastically on the growth conditions. The conditions under which wires with high aspect ratio and single orientation parallel to the step edges can be formed are demonstrated.
Abstract:
In many industrial applications, accurate and fast surface reconstruction is essential for quality control. Variation in surface finishing parameters, such as surface roughness, can reflect defects in a manufacturing process, non-optimal product operational efficiency, and reduced life expectancy of the product. This thesis considers the reconstruction and analysis of high-frequency variation, that is, roughness, on planar surfaces. Standard roughness measures in industry are calculated from surface topography. A fast and non-contact method to obtain surface topography is to apply photometric stereo in the estimation of surface gradients and to reconstruct the surface by integrating the gradient fields. Alternatively, visual methods, such as statistical measures, fractal dimension, and distance transforms, can be used to characterize surface roughness directly from gray-scale images. In this thesis, the accuracy of distance transforms, statistical measures, and fractal dimension is evaluated in the estimation of surface roughness from gray-scale images and topographies. The results are contrasted with standard industry roughness measures. In distance transforms, the key idea is that distance values calculated along a highly varying surface are greater than distances calculated along a smoother surface. Statistical measures and fractal dimension are common surface roughness measures. In the experiments, skewness and variance of the brightness distribution, fractal dimension, and distance transforms exhibited strong linear correlations with standard industry roughness measures. One of the key strengths of the photometric stereo method is the acquisition of the higher-frequency variation of surfaces. In this thesis, the reconstruction of planar surfaces with high-frequency variation is studied in the presence of imaging noise and blur. Two Wiener filter-based methods are proposed, of which one is optimal in the sense of surface power spectral density given the spectral properties of the imaging noise and blur. Experiments show that the proposed methods preserve the inherent high-frequency variation in the reconstructed surfaces, whereas traditional reconstruction methods typically handle incorrect measurements by smoothing, which dampens the high-frequency variation.
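A short sketch of FFT-based least-squares integration of gradient fields (the classic Frankot-Chellappa projection, used here as a stand-in for the reconstruction step; the thesis' Wiener filter-based variants would modify the spectral weighting). The surface and its gradients are synthetic:

```python
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n] / n
z_true = 0.01 * np.sin(4 * np.pi * x)          # hypothetical surface height
p = np.gradient(z_true, axis=1)                # dz/dx per pixel
q = np.gradient(z_true, axis=0)                # dz/dy per pixel

w = 2 * np.pi * np.fft.fftfreq(n)              # angular frequency per pixel
WX, WY = np.meshgrid(w, w)
denom = WX ** 2 + WY ** 2
denom[0, 0] = 1.0                              # avoid division by zero at DC

# Least-squares integrable surface from the gradient spectra.
Z = (-1j * WX * np.fft.fft2(p) - 1j * WY * np.fft.fft2(q)) / denom
Z[0, 0] = 0.0                                  # fix the free mean level
z_rec = np.real(np.fft.ifft2(Z))
print("approx. max reconstruction error:", np.abs(z_rec - z_true).max())
```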
Abstract:
The central goal of food safety policy in the European Union (EU) is to protect consumer health by guaranteeing a high level of food safety throughout the food chain. This goal can in part be achieved by testing foodstuffs for the presence of various chemical and biological hazards. The aim of this study was to facilitate food safety testing by providing rapid and user-friendly methods for the detection of particular food-related hazards. Heterogeneous competitive time-resolved fluoroimmunoassays were developed for the detection of selected veterinary drug residues, namely coccidiostat residues, in eggs and chicken liver. After a simplified sample preparation procedure, the immunoassays were performed either in a manual format with dissociation-enhanced measurement or in an automated format with pre-dried assay reagents and surface measurement. Although the assays were primarily designed for screening purposes, providing only qualitative results, they could also be used in a quantitative mode. All the developed assays had good performance characteristics, enabling reliable screening of samples at the concentration levels required by the authorities. A novel polymerase chain reaction (PCR)-based assay system was developed for the detection of Salmonella spp. in food. The sample preparation included a short non-selective pre-enrichment step, after which the target cells were collected with immunomagnetic beads and applied to PCR reaction vessels containing all the reagents required for the assay in dry form. The homogeneous PCR assay was performed with a novel instrument platform, GenomEra™, and the qualitative assay results were automatically interpreted based on end-point time-resolved fluorescence measurements and cut-off values. The assay was validated using various food matrices spiked with sub-lethally injured Salmonella cells at levels of 1-10 colony forming units (CFU)/25 g of food. The main advantage of the system was the exceptionally short time to result: the entire process, starting from the pre-enrichment and ending with the PCR result, could be completed in eight hours. In conclusion, molecular methods using state-of-the-art assay techniques were developed for food safety testing. The combination of time-resolved fluorescence detection and ready-to-use reagents enabled sensitive assays easily amenable to automation. Consequently, together with the simplified sample preparation, these methods could prove applicable in routine testing.
Abstract:
The aim of this work is to study the analytical calculation procedures found in the literature for calculating the eddy-current losses in surface-mounted permanent magnets in a PMSM application. The most promising algorithms are implemented in MATLAB using the dimensional data of the LUT prototype machine. In addition, finite element analysis, carried out with the Flux 2D software from Cedrat Ltd, is applied to calculate the eddy-current losses in the permanent magnets. The results obtained from the analytical methods are compared with the numerical results.
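A hedged sketch of the kind of analytical estimate such procedures build on: the textbook thin-plate eddy-loss approximation p = (π²/6)·σ·f²·B²·d² per unit volume, applied per magnet segment. This is not necessarily one of the algorithms implemented in the work, and all numbers below are illustrative:

```python
import math

sigma = 6.7e5      # conductivity of sintered NdFeB (S/m), typical value
f = 1000.0         # frequency of the flux-density harmonic (Hz), assumed
B_peak = 0.05      # peak of the harmonic flux density in the magnet (T), assumed

def eddy_loss(width, length, height, n_segments):
    """Eddy-current loss (W) of a magnet split into n_segments along its width."""
    d = width / n_segments                      # segment width seen by the flux
    volume = width * length * height
    p_vol = (math.pi ** 2 / 6) * sigma * (f * B_peak * d) ** 2
    return p_vol * volume

# Segmentation reduces the loss roughly with the square of the segment width.
for n in (1, 2, 4, 8):
    print(f"{n} segment(s): {eddy_loss(0.04, 0.03, 0.005, n):.2f} W")
```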