913 results for MS-based methods
Abstract:
The widespread misuse of drugs has increased the number of multiresistant bacteria, and this means that tools that can rapidly detect and characterize bacterial response to antibiotics are much needed in the management of infections. Various techniques, such as the resazurin-reduction assays, the mycobacterial growth indicator tube or polymerase chain reaction-based methods, have been used to investigate bacterial metabolism and its response to drugs. However, many are relatively expensive or unable to distinguish between living and dead bacteria. Here we show that the fluctuations of highly sensitive atomic force microscope cantilevers can be used to detect low concentrations of bacteria, characterize their metabolism and quantitatively screen (within minutes) their response to antibiotics. We applied this methodology to Escherichia coli and Staphylococcus aureus, showing that live bacteria produced larger cantilever fluctuations than bacteria exposed to antibiotics. Our preliminary experiments suggest that the fluctuation is associated with bacterial metabolism.
Abstract:
This Master's thesis presents the design of a single-stage turbine's supersonic stator and subsonic rotor, together with the inlet section and the diffuser. The thesis begins with a review of the applications and theory of axial turbines, after which the methods and principles underlying the design are presented. The basic design is carried out with Traupel's method using the WinAxtu 1.1 design program, and the efficiency is additionally estimated with an Excel-based calculation. The supersonic stator is designed on the basis of the basic design results by applying the method of characteristics to the diverging part of the nozzle and area ratios to the converging part. The rotor camber line is drawn with Sahlberg's method, and the blade shape is determined by combining the A3K7 thickness distribution with the design principles of dense blade cascades. The inlet section is designed to be as smooth as possible according to the geometry data and examples from the literature. Finally, the inlet section is modeled with CFD calculations. The diffuser is designed by applying, where appropriate, data presented in the literature, the inlet-section geometry, and CFD calculations. The design results are finally compared with results presented in the literature, and the success of the design as well as possible problem areas are assessed.
Abstract:
Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) of the solution, where the generation of spurious oscillations or smearing should be precluded. This work is devoted to the development of an efficient numerical technique to deal with pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method, developed in this study, is based on using meshless numerical particles which carry the solution along the characteristics defining the convective transport. The resolution of steep fronts of the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. In the case of convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has spatial accuracy of second order and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in the regions of steep fronts of the solution. Moreover, the particle transport method can be successfully used for the numerical simulation of real-life problems in, for example, chemical engineering.
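To make the semi-Lagrangian idea concrete, here is a minimal Python sketch (not the thesis' actual method) that advects numerical particles along the characteristics of the 1D linear convection equation u_t + a*u_x = 0 and projects them onto a fixed Eulerian grid; the velocity, grid, initial profile, and the simple nearest-particle projection are all illustrative assumptions, and in particular the projection is not the monotone technique the thesis designs.

    import numpy as np

    # 1D linear convection u_t + a*u_x = 0: particles carry the solution
    # exactly along the characteristics x(t) = x0 + a*t (periodic domain).
    a = 1.0                                   # assumed constant velocity
    grid = np.linspace(0.0, 1.0, 201)         # fixed Eulerian grid
    dt, steps = 0.002, 250

    x = grid.copy()                           # particle positions
    u = np.exp(-200.0 * (grid - 0.25) ** 2)   # steep initial pulse

    for _ in range(steps):
        x = (x + a * dt) % 1.0                # exact characteristic transport

    # Crude projection back onto the grid: each node takes the value of its
    # nearest particle. The thesis designs a fast monotone projection; this
    # nearest-neighbor stand-in is only for illustration.
    order = np.argsort(x)
    xs, us = x[order], u[order]
    idx = np.clip(np.searchsorted(xs, grid), 1, len(xs) - 1)
    take_left = (grid - xs[idx - 1]) < (xs[idx] - grid)
    u_grid = np.where(take_left, us[idx - 1], us[idx])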
Abstract:
The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary. The solution of a particular problem can be represented in different ways. An algorithm most efficient in dealing with a particular representation may be less efficient in dealing with other representations. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with possible variables including the centers, widths, and weights of the basis functions, both with control parameters kept fixed and with them adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the differential evolution algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and the best setting was found to be problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions using fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
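As background for the algorithm under study, the following Python sketch shows one classic DE/rand/1/bin generation loop; F and CR are the control parameters that the thesis makes adaptive through fuzzy control (the fuzzy controller itself is not reproduced here), and the sphere objective and all numeric settings are assumed for illustration.

    import numpy as np

    def de_rand_1_bin(f, bounds, np_=30, F=0.5, CR=0.9, gens=200, seed=0):
        """Classic DE/rand/1/bin. F and CR are the control parameters
        that the fuzzy-adaptive variant adjusts during the run."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        d = len(lo)
        pop = rng.uniform(lo, hi, size=(np_, d))
        cost = np.apply_along_axis(f, 1, pop)
        for _ in range(gens):
            for i in range(np_):
                r1, r2, r3 = rng.choice([j for j in range(np_) if j != i],
                                        size=3, replace=False)
                mutant = pop[r1] + F * (pop[r2] - pop[r3])   # mutation
                cross = rng.random(d) < CR                   # binomial crossover
                cross[rng.integers(d)] = True                # ensure one gene
                trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
                c = f(trial)
                if c <= cost[i]:                             # greedy selection
                    pop[i], cost[i] = trial, c
        return pop[np.argmin(cost)], cost.min()

    # Example: minimize the sphere function in 5 dimensions.
    best, val = de_rand_1_bin(lambda x: float(np.sum(x * x)),
                              (np.full(5, -5.0), np.full(5, 5.0)))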
Abstract:
Technological progress has made a huge amount of data available at increasing spatial and spectral resolutions. Therefore, the compression of hyperspectral data is an area of active research. In some fields, the original quality of a hyperspectral image cannot be compromised, and in these cases lossless compression is mandatory. The main goal of this thesis is to provide improved methods for the lossless compression of hyperspectral images. Both prediction- and transform-based methods are studied. Two kinds of prediction-based methods are studied. In the first method, the spectra of a hyperspectral image are first clustered and an optimized linear predictor is calculated for each cluster. In the second prediction method, the linear prediction coefficients are not fixed but are recalculated for each pixel. A parallel implementation of the above-mentioned linear prediction method is also presented. Two transform-based methods are presented as well. Vector Quantization (VQ) is used together with a new coding of the residual image. In addition, we have developed a new back end for a compression method utilizing Principal Component Analysis (PCA) and Integer Wavelet Transform (IWT). The performance of the compression methods is compared to that of other compression methods. The results show that the proposed linear prediction methods outperform the previous methods. In addition, a novel fast exact nearest-neighbor search method is developed. The search method is used to speed up the Linde-Buzo-Gray (LBG) clustering method.
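A rough Python sketch of the first prediction scheme, under assumed shapes and prediction order, might look as follows; the thesis' actual predictor optimization and the entropy coding of the residuals are not reproduced here.

    import numpy as np
    from sklearn.cluster import KMeans

    # Toy sketch: cluster the spectra, then fit one linear predictor per
    # cluster that predicts each band from the P previous bands. The cube
    # shape, cluster count, and order P are illustrative choices.
    rng = np.random.default_rng(0)
    cube = rng.integers(0, 4096, size=(64, 64, 100)).astype(np.float64)
    pixels = cube.reshape(-1, cube.shape[2])          # one spectrum per row
    labels = KMeans(n_clusters=8, n_init=10,
                    random_state=0).fit_predict(pixels)

    P = 3                                             # prediction order
    residuals = np.zeros_like(pixels)                 # first P bands kept raw
    for c in range(8):
        spec = pixels[labels == c]
        for b in range(P, spec.shape[1]):
            X, y = spec[:, b - P:b], spec[:, b]
            w, *_ = np.linalg.lstsq(X, y, rcond=None) # per-cluster predictor
            pred = np.rint(X @ w)                     # integer prediction
            residuals[labels == c, b] = y - pred      # entropy-code these

    # Decoding is the reverse: rebuild band b from the already-decoded
    # previous bands plus the stored residual, so compression is lossless.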
Abstract:
When a bloodstream infection (BSI) is suspected, most of the laboratory results, biochemical and haematologic, are available within the first hours after hospital admission of the patient. This is not the case for diagnostic microbiology, which generally takes longer because blood culture, to date the reference standard for documenting the microbial agents of BSI, relies on bacterial or fungal growth. Microbial diagnosis of BSI directly from blood has been proposed to speed up the determination of the etiological agent but has been limited by the very low number of circulating microbes during these paucibacterial infections. Thanks to recent advances in molecular biology, including improvements in nucleic acid extraction and amplification, several PCR-based methods for the diagnosis of BSI directly from whole blood have emerged. In the present review, we discuss the advantages and limitations of these new molecular approaches, which at best complement the culture-based diagnosis of BSI.
Abstract:
During evolution, the immune system has diversified to protect the host from the extremely wide array of possible pathogens. Until recently, immune responses were dissected by use of global approaches and bulk tools, averaging responses across samples and potentially missing particular contributions of individual cells. This is a strongly limiting factor, considering that initial immune responses are likely to be triggered by a restricted number of cells at the vanguard of host defenses. The development of novel, single-cell technologies is a major innovation offering great promise for basic and translational immunology with the potential to overcome some of the limitations of traditional research tools, such as polychromatic flow cytometry or microscopy-based methods. At the transcriptional level, much progress has been made in the fields of microfluidics and single-cell RNA sequencing. At the protein level, mass cytometry already allows the analysis of twice as many parameters as flow cytometry. In this review, we explore the basis and outcome of immune-cell diversity, how genetically identical cells become functionally different, and the consequences for the exploration of host-immune defense responses. We will highlight the advantages, trade-offs, and potential pitfalls of emerging, single-cell-based technologies and how they provide unprecedented detail of immune responses.
Abstract:
This Master's thesis presents a method for measuring population diversity in floating-point-coded evolutionary algorithms and examines its behavior experimentally. Evolutionary algorithms are population-based methods for solving optimization problems. In evolutionary algorithms, controlling population diversity is essential so that the search is sufficiently reliable and, on the other hand, sufficiently fast. Measuring diversity is particularly necessary when studying the dynamic behavior of evolutionary algorithms. The thesis considers measuring diversity in both the search space and the objective-function space. So far, no fully satisfactory diversity measures have existed, and the goal of this work is to develop a general-purpose method for measuring the relative and absolute diversity of floating-point-coded evolutionary algorithms in the search space. The behavior and usefulness of the developed measures are examined experimentally by solving optimization problems with a differential evolution algorithm. The implemented measures are based on computing standard deviations from the population. The standard deviations are scaled with respect to either the initial population or the current population, depending on whether absolute or relative diversity is being computed. In the experimental evaluation, the developed measures were found to work well and to be useful. Stretching the objective function along the coordinate axes does not affect the measure's behavior, nor does rotating the objective function in the coordinate system affect the results. The time complexity of the presented method is linear in the population size, so the measure is fast even for large populations. The relative diversity gives comparable results regardless of the number of parameters or the population size.
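A minimal Python sketch of a diversity measure of this kind is given below; the mapping of the two scaling choices to "absolute" and "relative" diversity, and the exact scaling details, are assumptions here and may differ from the thesis.

    import numpy as np

    def diversity(pop, ref_pop=None):
        """Population diversity as the mean of per-parameter standard
        deviations. With ref_pop (e.g. the initial population) the result
        is scaled against the reference (taken here as the relative
        measure); without it, the raw mean is returned (taken here as the
        absolute measure). The thesis' exact scaling may differ."""
        sd = pop.std(axis=0)
        if ref_pop is None:
            return float(sd.mean())                  # "absolute" diversity
        ref_sd = ref_pop.std(axis=0)
        return float((sd / np.where(ref_sd > 0, ref_sd, 1.0)).mean())

    # Example: diversity of a population shrinks as the search converges.
    rng = np.random.default_rng(1)
    init = rng.uniform(-5, 5, size=(50, 10))
    late = init * 0.1                                # contracted population
    print(diversity(init), diversity(late, ref_pop=init))  # ~2.9 and 0.1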
Abstract:
Background: Nowadays, combining different sources of information to improve the available biological knowledge is a challenge in bioinformatics. Kernel-based methods are among the most powerful approaches for integrating heterogeneous data types. Kernel-based data integration approaches consist of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge.
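A minimal Python sketch of the two basic steps described above, using scikit-learn with an assumed RBF kernel per data set and an unweighted sum as the simplest combination rule (weighted sums are a common alternative):

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.decomposition import KernelPCA

    # Two heterogeneous views of the same 100 samples (toy data).
    rng = np.random.default_rng(0)
    X1 = rng.normal(size=(100, 20))     # e.g. one omics layer
    X2 = rng.normal(size=(100, 500))    # e.g. another omics layer

    # Step 1: choose a kernel per data set (RBF here; an assumed choice).
    K1 = rbf_kernel(X1)
    K2 = rbf_kernel(X2)

    # Step 2: combine the kernels into one representation of all sources.
    K = K1 + K2

    # Kernel PCA on the combined, precomputed kernel reduces dimensionality
    # over all data sources at once.
    emb = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)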
Abstract:
The main objective of this study was to define the financial competence needed in key positions at a paper mill to promote profit- and cost-conscious operations. Based on a literature analysis, a model of how financial competence is constructed was formed. The model was tested by interviewing people working in key positions at a paper mill. Based on the results, a final conception was formed of the financial competence needed at a paper mill and of suitable methods for developing it. The study showed that financial competence is built from a combination of an employee's internal characteristics and his or her financial knowledge and skills. The level of knowledge and skills appears to be divided into several layers according to how widely the knowledge and skills can be applied in the organization. The most important internal characteristics turned out to be interaction skills, a sense of responsibility, and problem-solving ability. In strengthening financial competence, interactive means, especially those related to communication, and the creation of a work environment that supports the use of competence emerged as key.
Abstract:
PURPOSE: We conducted a comprehensive review of the design, implementation, and outcome of first-in-human (FIH) trials of monoclonal antibodies (mAbs) to clearly determine early clinical development strategies for this class of compounds. METHODS: We performed a PubMed search using appropriate terms to identify reports of FIH trials of mAbs published in peer-reviewed journals between January 2000 and April 2013. RESULTS: A total of 82 publications describing FIH trials were selected for analysis. Only 27 articles (33%) reported the criteria used for selecting the starting dose (SD). Dose escalation was performed using rule-based methods in 66 trials (80%). The median number of planned dose levels was five (range, two to 13). The median of the ratio between the highest planned dose and the SD was 27 (range, two to 3,333). Although in 56 studies (68%) at least one grade 3 or 4 toxicity event was reported, no dose-limiting toxicity was observed in 47 trials (57%). The highest planned dose was reached in all trials, but the maximum-tolerated dose (MTD) was defined in only 13 studies (16%). The median of the ratio between MTD and SD was eight (range, four to 1,000). The recommended phase II dose was indicated in 34 studies (41%), but in 25 (73%) of these trials, this dose was chosen without considering toxicity as the main selection criterion. CONCLUSION: This literature review highlights the broad design heterogeneity of FIH trials testing mAbs. Because of the limited observed toxicity, the MTD was infrequently reached, and therefore, the recommended phase II dose for subsequent clinical trials was only tentatively defined.
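For context, the classical 3+3 design is the archetype of the rule-based escalation methods mentioned above; the abstract does not state which rules the individual trials applied, so the following Python sketch of 3+3 decision logic is purely illustrative.

    def three_plus_three(dlt_counts):
        """Toy 3+3 escalation logic (illustrative only; individual FIH
        trials may use different rules). dlt_counts maps a dose level to
        the number of dose-limiting toxicities seen in a cohort of 3."""
        decisions = {}
        for dose, dlts in dlt_counts.items():
            if dlts == 0:
                decisions[dose] = "escalate to next dose level"
            elif dlts == 1:
                decisions[dose] = "expand cohort to 6 patients"
            else:
                decisions[dose] = "stop: MTD exceeded"
        return decisions

    # Example: no DLTs at the two lowest doses, 2/3 at the highest.
    print(three_plus_three({1: 0, 2: 0, 3: 2}))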
Abstract:
Slab and cluster model spin-polarized calculations have been carried out to study various properties of isolated first-row transition metal atoms adsorbed on the anionic sites of the regular MgO(100) surface. The calculated adsorption energies follow the trend of the metal cohesive energies, indicating that the changes in the metal-support and metal-metal interactions along the series are dominated by atomic properties. In all cases, except for Ni at the generalized gradient approximation level, the number of unpaired electrons is maintained as in the isolated metal atom. The energy required to change the atomic state from high to low spin has been computed using the PW91 and B3LYP density-functional-theory-based methods. PW91 fails to predict the proper ground state of V and Ni, but the results for the isolated and adsorbed atom are consistent within the method. B3LYP properly predicts the ground state of all first-row transition atoms, and the computed energy for the high- to low-spin transition considered is comparable to experiment. In all cases, the interaction with the surface results in a reduced high- to low-spin transition energy.
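Put schematically (a restatement for clarity; the paper's precise definitions may include method-specific details), the quantity compared for the free and the adsorbed atom is the high- to low-spin transition energy

    dE(HS -> LS) = E(low spin) - E(high spin),

and the reported result is that adsorption on MgO(100) reduces this energy for every metal considered.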
Abstract:
BACKGROUND: Available methods to simulate nucleotide or amino acid data typically use Markov models to simulate each position independently. These approaches are not appropriate to assess the performance of combinatorial and probabilistic methods that look for coevolving positions in nucleotide or amino acid sequences. RESULTS: We have developed a web-based platform that gives user-friendly access to two phylogenetic-based methods implementing the Coev model: the evaluation of coevolving scores and the simulation of coevolving positions. We have also extended the capabilities of the Coev model to allow for the generalization of the alphabet used in the Markov model, which can now analyse both nucleotide and amino acid data sets. The simulation of coevolving positions is novel and builds upon the developments of the Coev model. It allows users to simulate pairs of dependent nucleotide or amino acid positions. CONCLUSIONS: The main focus of our paper is the new simulation method we present for coevolving positions. The implementation of this method is embedded within the web platform Coev-web, which is freely accessible at http://coev.vital-it.ch/ and was tested in most modern web browsers.
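To illustrate what simulating a pair of dependent positions involves, here is a toy continuous-time Markov sketch in Python in which substitutions toward an assumed "favored" profile pair receive a higher rate; this is a deliberately simplified stand-in, not the actual Coev model or its parameterization.

    import numpy as np

    ALPHABET = "ACGT"
    FAVORED = {("A", "T"), ("G", "C")}   # assumed coevolving profiles
    R_TO, R_AWAY = 5.0, 1.0              # assumed substitution rates

    def simulate_pair(t_total, rng):
        """Gillespie-style simulation of one pair of positions."""
        state, t = ("A", "A"), 0.0
        while True:
            # Enumerate all single-position substitutions and their rates.
            moves = []
            for pos in (0, 1):
                for c in ALPHABET:
                    if c != state[pos]:
                        nxt = (c, state[1]) if pos == 0 else (state[0], c)
                        moves.append((nxt, R_TO if nxt in FAVORED else R_AWAY))
            total = sum(r for _, r in moves)
            t += rng.exponential(1.0 / total)        # waiting time
            if t >= t_total:
                return state
            probs = np.array([r for _, r in moves]) / total
            state = moves[rng.choice(len(moves), p=probs)][0]

    rng = np.random.default_rng(0)
    pairs = [simulate_pair(2.0, rng) for _ in range(1000)]
    # With these rates, most simulated pairs end in a favored profile.
    frac_favored = sum(p in FAVORED for p in pairs) / len(pairs)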
Abstract:
Online paper web analysis relies on traversing scanners that criss-cross on top of a rapidly moving paper web. The sensors embedded in the scanners measure many important quality variables of paper, such as basis weight, caliper and porosity. Most of these quantities vary considerably, and the measurements are noisy at many different scales. The zigzagging nature of scanning makes it difficult to separate machine direction (MD) and cross direction (CD) variability from one another. To improve the 2D resolution of the quality variables above, the paper quality control team at the Department of Mathematics and Physics at LUT has implemented efficient Kalman-filtering-based methods that currently use 2D Fourier series. Fourier series are global and therefore resolve local spatial detail on the paper web rather poorly. The target of the current thesis is to study alternative wavelet-based representations as candidates to replace the Fourier basis for a higher-resolution spatial reconstruction of these quality variables. The accuracy of wavelet-compressed 2D web fields will be compared with that of corresponding truncated Fourier series based fields.
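A small Python sketch of the comparison being set up, using PyWavelets (an assumed tooling choice) and synthetic data: both representations keep the same small fraction of coefficients, and the localized feature illustrates why a wavelet basis can resolve local detail that a truncated global Fourier basis smears.

    import numpy as np
    import pywt

    # Toy 2D "web field" with a localized defect: local features are exactly
    # what global Fourier bases resolve poorly and wavelets handle well.
    n = 128
    y, x = np.mgrid[0:n, 0:n] / n
    field = (0.1 * np.sin(2 * np.pi * 3 * x)
             + np.exp(-((x - 0.7) ** 2 + (y - 0.3) ** 2) / 0.0005))

    def keep_largest(a, frac):
        """Zero all but the largest `frac` of coefficients by magnitude."""
        flat = np.abs(a).ravel()
        thr = np.sort(flat)[int((1 - frac) * flat.size)]
        return np.where(np.abs(a) >= thr, a, 0)

    frac = 0.02                                  # keep 2% of coefficients

    # Truncated 2D Fourier representation.
    F = keep_largest(np.fft.fft2(field), frac)
    fourier_rec = np.real(np.fft.ifft2(F))

    # Thresholded wavelet representation (Daubechies-4, 4 levels; assumed).
    coeffs = pywt.wavedec2(field, "db4", level=4)
    arr, slices = pywt.coeffs_to_array(coeffs)
    wavelet_rec = pywt.waverec2(
        pywt.array_to_coeffs(keep_largest(arr, frac), slices,
                             output_format="wavedec2"), "db4")[:n, :n]

    # The wavelet reconstruction keeps the local defect much sharper.
    print(np.abs(field - fourier_rec).max(), np.abs(field - wavelet_rec).max())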
Abstract:
X-ray medical imaging is increasingly becoming three-dimensional (3-D). The dose to the population and its management are of special concern in computed tomography (CT). Task-based methods with model observers to assess the dose-image quality trade-off are promising tools, but they still need to be validated for real volumetric images. The purpose of the present work is to evaluate anthropomorphic model observers in 3-D detection tasks for low-contrast CT images. We scanned a low-contrast phantom containing four types of signals at three dose levels and used two reconstruction algorithms. We implemented a multislice model observer based on the channelized Hotelling observer (msCHO) with anthropomorphic channels and investigated different internal noise methods. We found a good correlation for all tested model observers. These results suggest that the msCHO can be used as a relevant task-based method to evaluate low-contrast detection for CT and optimize scan protocols to lower dose in an efficient way.
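For reference, a single-slice channelized Hotelling observer, the building block of the msCHO, can be sketched in a few lines of Python; the random channels and toy images below are illustrative assumptions (anthropomorphic channels such as Gabor channels would be used in practice, and the multislice extension is not reproduced here).

    import numpy as np

    def cho_statistic(imgs_absent, imgs_present, channels, test_img):
        """Single-slice channelized Hotelling observer.
        imgs_*: (n_images, n_pixels); channels: (n_pixels, n_channels)."""
        v0 = imgs_absent @ channels          # channel outputs, signal absent
        v1 = imgs_present @ channels         # channel outputs, signal present
        S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))          # average covariance
        w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))  # Hotelling template
        return float((test_img @ channels) @ w)          # decision variable

    # Toy use: 8 random channels over 32x32 images (illustrative only).
    rng = np.random.default_rng(0)
    channels = rng.normal(size=(32 * 32, 8))
    absent = rng.normal(size=(200, 32 * 32))
    present = absent + 0.5                   # crude stand-in for a signal
    score = cho_statistic(absent, present, channels, present[0])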