976 results for Fast methods
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori preprocessing step. Among the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and as being effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain: the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information for making correct predictions on unseen data; in fact, it tends to produce a discriminating function that behaves like the nearest neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels drawn from a large domain. A second drawback of tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of kernels in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. The first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees with an algorithm able to project the data onto a lower dimensional space, with the property that similar structures are mapped similarly. By building kernel functions on the lower dimensional representation, we are able to perform inexact matchings between different inputs in the original space. The second contribution is a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. The third contribution is devoted to reducing the computational burden of calculating a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique for kernels such as the subtree and subset tree kernels. In those cases, Directed Acyclic Graphs can be used to compactly represent substructures shared by different trees, thus reducing the computational burden and storage requirements.
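To make the counting idea concrete, the following is a minimal Python sketch of a subtree kernel, which counts pairs of identical complete subtrees shared by two trees; the tuple-based tree encoding and all names here are illustrative, and the thesis's adaptive, position-aware and DAG-based variants are not reproduced.

from collections import Counter

# A tree is encoded as (label, [child, child, ...]).
def signatures(tree):
    # Return (canonical string of the tree, Counter of the canonical
    # strings of all complete subtrees it contains).
    label, children = tree
    parts = [signatures(c) for c in children]
    sig = label + "(" + ",".join(s for s, _ in parts) + ")"
    counts = Counter([sig])
    for _, sub in parts:
        counts.update(sub)
    return sig, counts

def subtree_kernel(t1, t2):
    # K(t1, t2) = number of pairs of identical complete subtrees,
    # i.e. the dot product of the two subtree-count vectors.
    _, c1 = signatures(t1)
    _, c2 = signatures(t2)
    return sum(n * c2[s] for s, n in c1.items())

# Toy parse trees: the shared complete subtrees are D, N, V and NP(D,N).
a = ("S", [("NP", [("D", []), ("N", [])]), ("VP", [("V", [])])])
b = ("S", [("NP", [("D", []), ("N", [])]), ("VP", [("V", []), ("NP", [])])])
print(subtree_kernel(a, b))  # -> 4

Because a node contributes only when its complete subtree matches exactly, label-rich datasets yield few matches, which is precisely the sparsity problem the first contribution addresses.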
Abstract:
This thesis is focused on the development of heteronuclear correlation methods in solid-state NMR spectroscopy, where the spatial dependence of the dipolar coupling is exploited to obtain structural and dynamical information in solids. Quantitative results on dipolar coupling constants are extracted by means of spinning-sideband analysis in the indirect dimension of the two-dimensional experiments. The principles of sideband analysis were established, and are currently widely used, in the group of Prof. Spiess for the special case of homonuclear 1H double-quantum spectroscopy. The generalization of these principles to the heteronuclear case is presented, with special emphasis on naturally abundant 13C-1H systems. For proton spectroscopy in the solid state, line narrowing is of particular importance, and is here achieved by very fast sample rotation at the magic angle (MAS), with frequencies up to 35 kHz. Thereby, however, the heteronuclear dipolar couplings are also suppressed, and they have to be recoupled in order to achieve an efficient excitation of the observed multiple-quantum modes. Heteronuclear recoupling is most straightforwardly accomplished by performing the known REDOR experiment, where pi-pulses are applied every half rotor period. This experiment was modified by the insertion of an additional spectroscopic dimension, such that heteronuclear multiple-quantum experiments can be carried out which, as shown experimentally and theoretically, closely resemble homonuclear double-quantum experiments. Variants are presented which are well suited for the recording of high-resolution 13C-1H shift-correlation and spinning-sideband spectra, by means of which spatial proximities and quantitative dipolar coupling constants, respectively, of heteronuclear spin pairs can be determined. Spectral editing of 13C spectra is shown to be feasible with these techniques. Moreover, order phenomena and dynamics in columnar mesophases with 13C in natural abundance were investigated. Two further modifications of the REDOR concept allow the correlation of 13C with quadrupolar nuclei, such as 2H. The spectroscopic handling of these nuclei is challenging in that they cover large frequency ranges, and with the new experiments it is shown how the excitation problem can be tackled or circumvented altogether, respectively. As an example, one of the techniques is used for the identification of a previously unknown motional process of the H-bonded protons in the crystalline parts of poly(vinyl alcohol).
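As a reminder of why sideband analysis yields structural information, the heteronuclear dipolar coupling constant of an I-S spin pair scales with the inverse cube of the internuclear distance (in angular frequency units):

D_{IS} = \frac{\mu_0}{4\pi}\,\frac{\gamma_I\,\gamma_S\,\hbar}{r_{IS}^{3}}
\qquad\Longrightarrow\qquad
r_{IS} = \left(\frac{\mu_0\,\gamma_I\,\gamma_S\,\hbar}{4\pi\,D_{IS}}\right)^{1/3}

so a coupling constant fitted from the spinning-sideband pattern translates directly into a 13C-1H distance, while motional averaging reduces the apparent coupling and thereby encodes the dynamical information.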
Abstract:
This Ph.D. thesis focuses on the investigation of chemical and sensorial analytical parameters linked to the quality and purity of different categories of oils obtained from olives: extra virgin olive oils, both those sold in large retail chains (supermarkets and discount stores) and those collected directly at some Italian mills, and lower-quality oils (refined, lampante and “repaso”). Alongside the adoption of traditional and well-known analytical procedures such as gas chromatography and high-performance liquid chromatography, I set up innovative, fast and environmentally friendly methods. For example, I developed analytical approaches based on Fourier transform mid-infrared spectroscopy (FT-MIR) and time domain reflectometry (TDR), coupled with a robust chemometric elaboration of the results. I also investigated freshness and quality markers that are not included among the official parameters (in Italian and European regulations): the adoption of such a full chemical and sensorial analytical plan allowed me to obtain interesting information about the degree of quality of the EVOOs, mostly within the Italian market. Here the range of quality of EVOOs proved to be very wide, in terms of sensory attributes, price classes and chemical parameters. Thanks to the collaboration with other Italian and foreign research groups, I carried out several applicative studies, especially focusing on the shelf life of oils obtained from olives and on the effects of thermal stresses on the quality of the products. I also studied innovative technological treatments, such as clarification by inert gases, as an alternative to traditional filtration. Moreover, during a three-and-a-half-month research stay at the University of Applied Sciences in Zurich, I carried out a study on the application of statistical methods to the elaboration of sensory results, obtained thanks to the official Swiss Panel and to some consumer tests.
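As an illustration of the kind of chemometric treatment such spectral data receive, the sketch below classifies synthetic stand-ins for FT-MIR spectra with a standard scikit-learn pipeline; the data, class structure and model choices are hypothetical and not those used in the thesis.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical stand-in for FT-MIR absorbance spectra: 60 oils x 500
# wavenumbers, with small class-dependent band shifts so the toy data separate.
X = rng.normal(size=(60, 500))
y = np.repeat([0, 1, 2], 20)  # e.g. extra virgin / refined / lampante
X[y == 1, 100:120] += 0.5
X[y == 2, 300:320] += 0.5

model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      LinearDiscriminantAnalysis())
print(cross_val_score(model, X, y, cv=5).mean())  # cross-validated accuracy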
Abstract:
The increase in aquaculture operations worldwide has provided new opportunities for the transmission of aquatic viruses. The occurrence of viral diseases remains a significant limiting factor for aquaculture production and its sustainability. The ability to quickly identify the presence or absence of a pathogenic organism in fish would have significant advantages for aquaculture systems. Several molecular methods have found successful application in fish pathology, both for confirmatory diagnosis of overt diseases and for detection of asymptomatic infections. However, many different variants occur among fish host species and virus strains, so specific methods need to be developed and optimized for each pathogen, and often also for each host species. The first chapter of this PhD thesis presents a complete description of the major viruses that infect fish and provides relevant information on the most common methods and emerging technologies for the molecular diagnosis of viral diseases of fish. The development and application of a real-time PCR assay for the detection and quantification of lymphocystivirus is described in the second chapter. The assay proved to be highly sensitive, specific, reproducible and versatile for the detection and quantitation of lymphocystivirus. This technique can find multiple applications, such as asymptomatic carrier detection or pathogenesis studies of different LCDV strains. In the third chapter, a multiplex RT-PCR (mRT-PCR) assay was developed for the simultaneous detection of the viruses causing viral haemorrhagic septicaemia (VHS), infectious haematopoietic necrosis (IHN), infectious pancreatic necrosis (IPN) and sleeping disease (SD) in a single assay. This method was able to efficiently detect the viral RNA in tissue samples, revealing both single infections and co-infections in rainbow trout samples. The mRT-PCR method proved to be an accurate and fast way to support traditional diagnostic techniques in the diagnosis of the major viral diseases of rainbow trout.
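For context, absolute quantification with a real-time PCR assay of this kind typically rests on a standard curve of Ct against log10 copy number; the sketch below shows that arithmetic with made-up values (the actual assay parameters are those reported in the thesis).

import numpy as np

# Illustrative standard curve: Ct values for ten-fold dilutions of a
# quantified viral DNA standard (all numbers are invented).
log10_copies = np.array([7, 6, 5, 4, 3, 2])
ct = np.array([15.1, 18.4, 21.8, 25.2, 28.6, 31.9])

slope, intercept = np.polyfit(log10_copies, ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1  # 1.0 would mean perfect doubling per cycle
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")

# Quantify an unknown sample from its Ct via the fitted curve.
ct_unknown = 23.5
print(10 ** ((ct_unknown - intercept) / slope), "copies")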
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizable training set and notable computational effort. Methods for cross-domain text categorization have been proposed, allowing a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their respective representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated from one domain prove to be effectively reusable in a different one.
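A minimal sketch of the iterative centroid-adaptation idea follows, assuming TF-IDF features and cosine similarity as plausible defaults; the initialization, relabeling and convergence details of the actual method may differ.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def adapt_centroids(src_texts, src_labels, tgt_texts, n_iter=5):
    vec = TfidfVectorizer(sublinear_tf=True)
    X_src = vec.fit_transform(src_texts).toarray()
    X_tgt = vec.transform(tgt_texts).toarray()
    labels = np.array(src_labels)
    classes = sorted(set(src_labels))
    # Initial category profiles from the known (source) domain.
    C = np.array([X_src[labels == c].mean(axis=0) for c in classes])
    pred = cosine_similarity(X_tgt, C).argmax(axis=1)
    for _ in range(n_iter):
        # Re-estimate each profile from the target documents it attracted.
        for i in range(len(classes)):
            if (pred == i).any():
                C[i] = X_tgt[pred == i].mean(axis=0)
        pred = cosine_similarity(X_tgt, C).argmax(axis=1)
    return [classes[i] for i in pred]

Each pass moves the category profiles toward the vocabulary of the target domain, which is the intuition behind the reported cross-domain accuracy.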
Abstract:
The shallow water equations (SWE) are a hyperbolic system of balance laws that provide adequate approximations to large-scale flows in oceans, rivers and the atmosphere; mass and momentum are conserved. We distinguish two characteristic speeds: the advection speed, i.e. the speed of mass transport, and the gravity wave speed, i.e. the speed of the surface waves that carry energy and momentum. The Froude number is a dimensionless characteristic number, given by the ratio of the reference advection speed to the reference gravity wave speed. For the applications mentioned above it is typically very small, e.g. 0.01. Time-explicit finite volume schemes are the ones most often used for the numerical computation of hyperbolic balance laws. For these, the CFL stability condition must be satisfied, and the time increment is roughly proportional to the Froude number. Hence, for small Froude numbers, say below 0.2, the computational cost becomes high. Moreover, the numerical solutions are dissipative. It is well known that, under suitable conditions, the solutions of the SWE converge to the solutions of the lake equations (the zero-Froude-number SWE) as the Froude number tends to zero. In this limit the equations change type from hyperbolic to hyperbolic-elliptic. Furthermore, at small Froude numbers the order of convergence may drop or the numerical scheme may break down. In particular, incorrect asymptotic behaviour (with respect to the Froude number) has been observed for time-explicit schemes, which may cause these effects. Oceanographic and atmospheric flows are typically small perturbations of an underlying equilibrium state. We want numerical schemes for balance laws to preserve certain equilibrium states exactly, since otherwise the scheme may generate spurious flows; the approximation of the source term is therefore essential. Numerical schemes that preserve equilibrium states are called well-balanced.

In this thesis we split the SWE into a stiff, linear part and a non-stiff part in order to circumvent the severe time step restriction imposed by the CFL condition. The stiff part is approximated implicitly and the non-stiff part explicitly. To this end we use IMEX (implicit-explicit) Runge-Kutta and IMEX multistep time discretizations. The spatial discretization is carried out with the finite volume method. The stiff part is approximated by finite differences or in a genuinely multidimensional manner; for the multidimensional approximation we use approximate evolution operators which take all of the infinitely many directions of information propagation into account. The explicit terms are approximated with standard numerical fluxes. We thus obtain a stability condition analogous to that of a purely advective flow, i.e. the admissible time increment grows by a factor of the reciprocal of the Froude number. The schemes derived in this thesis are asymptotic preserving and well-balanced. Asymptotic preservation ensures that the numerical solution exhibits the "correct" asymptotic behaviour with respect to small Froude numbers. We present first- and second-order schemes. Numerical results confirm the order of convergence as well as the stability, well-balancedness and asymptotic preservation. In particular, for some schemes we observe that the order of convergence is almost independent of the Froude number.
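To fix ideas, a standard non-dimensionalization and a generic IMEX step can be sketched as follows; the notation is illustrative, and the thesis's precise splitting and evolution operators differ in detail. With the Froude number \varepsilon = u_{\mathrm{ref}}/\sqrt{g\,h_{\mathrm{ref}}}, the one-dimensional SWE over bottom topography b read

\partial_t h + \partial_x (hu) = 0, \qquad
\partial_t (hu) + \partial_x\!\left( hu^2 + \frac{h^2}{2\varepsilon^2} \right)
  = -\frac{h}{\varepsilon^2}\,\partial_x b,

so gravity waves propagate at speed O(1/\varepsilon) while advection remains O(1). Splitting the flux into a non-stiff advective part F_{\mathrm{ex}} and a stiff linear part F_{\mathrm{im}} (with source S), a first-order IMEX step for U = (h, hu) is

\frac{U^{n+1} - U^{n}}{\Delta t}
  = -\,\partial_x F_{\mathrm{ex}}(U^{n})
    - \partial_x F_{\mathrm{im}}(U^{n+1})
    + S(U^{n+1}),

whose CFL restriction involves only the O(1) advective speed, so the admissible \Delta t no longer shrinks with \varepsilon.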
Abstract:
We derive a new class of iterative schemes for accelerating the convergence of the EM algorithm, by exploiting the connection between fixed point iterations and extrapolation methods. First, we present a general formulation of one-step iterative schemes, which are obtained by cycling with the extrapolation methods. We then square the one-step schemes to obtain the new class of methods, which we call SQUAREM. Squaring a one-step iterative scheme simply means applying it twice within each cycle of the extrapolation method. Here we focus on first order, or rank-one, extrapolation methods for two reasons: (1) simplicity and (2) computational efficiency. In particular, we study two first order extrapolation methods, the reduced rank extrapolation (RRE1) and the minimal polynomial extrapolation (MPE1). The convergence of the new schemes, both one-step and squared, is non-monotonic with respect to the residual norm. The first order one-step and SQUAREM schemes are linearly convergent, like the EM algorithm, but with a faster rate of convergence. We demonstrate, through five different examples, the effectiveness of the first order SQUAREM schemes, SqRRE1 and SqMPE1, in accelerating the EM algorithm. The SQUAREM schemes are also shown to be vastly superior to their one-step counterparts, RRE1 and MPE1, in terms of computational efficiency. The proposed extrapolation schemes can fail due to the numerical problems of stagnation and near breakdown. We have developed a new hybrid iterative scheme that combines the RRE1 and MPE1 schemes in such a manner that it overcomes both stagnation and near breakdown. The squared first order hybrid scheme, SqHyb1, emerges as the iterative scheme of choice based on our numerical experiments: it combines the fast convergence of SqMPE1 with the stability of SqRRE1, while avoiding both near breakdown and stagnation. The SQUAREM methods can be incorporated very easily into an existing EM algorithm. They only require the basic EM step for their implementation and do not require any other auxiliary quantities such as the complete-data log-likelihood and its gradient or Hessian. They are an attractive option in problems with a very large number of parameters, and in problems where the statistical model is complex, the EM algorithm is slow, and each EM step is computationally demanding.
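A minimal sketch of the squaring idea, assuming a generic fixed-point (EM) map F; the norm-ratio steplength used here is one simple first order choice, standing in for the RRE1/MPE1 steplengths studied above.

import numpy as np

def squarem(F, theta, tol=1e-8, max_iter=500):
    # Accelerate the fixed-point iteration theta <- F(theta).
    for _ in range(max_iter):
        t1 = F(theta)
        t2 = F(t1)
        r = t1 - theta            # first difference
        v = (t2 - t1) - r         # second difference
        if np.linalg.norm(r) < tol:
            break
        alpha = -np.linalg.norm(r) / np.linalg.norm(v)
        theta_sq = theta - 2 * alpha * r + alpha ** 2 * v  # squared extrapolation step
        theta = F(theta_sq)       # one stabilizing basic step after the jump
    return theta

# Toy usage: a slow linear contraction with fixed point 100 converges
# in a couple of iterations instead of hundreds.
F = lambda x: 0.99 * x + 1.0
print(squarem(F, np.zeros(1)))

Note that only evaluations of the basic map F are needed, matching the claim that no gradients or Hessians are required.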
Abstract:
BACKGROUND: This study investigated the role of a negative FAST in the diagnostic and therapeutic algorithm for multiply injured patients with liver or splenic lesions. METHODS: A retrospective analysis of 226 multiply injured patients with liver or splenic lesions treated at Bern University Hospital, Switzerland. RESULTS: FAST failed to detect free fluid or organ lesions in 45 of 226 patients with spleen or liver injuries (sensitivity 80.1%). Overall specificity was 99.5%. The positive and negative predictive values were 99.4% and 83.3%. The overall likelihood ratios for a positive and a negative FAST were 160.2 and 0.2, respectively. Grade III-V organ lesions were detected more frequently than grade I and II lesions. Without the additional diagnostic accuracy of a CT scan, the mean ISS of the FAST-false-negative patients would have been significantly underestimated and 7 otherwise unsuspected intra-abdominal injuries would have been missed. CONCLUSION: FAST is an expedient tool for the primary assessment of polytraumatized patients to rule out high-grade intra-abdominal injuries. However, the low overall diagnostic sensitivity of FAST may lead to underestimated injury patterns, and delayed complications may occur. Hence, in hemodynamically stable patients with abdominal trauma, an early CT scan should be considered, and one must be aware of the potential shortcomings of a "negative FAST".
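The reported likelihood ratios follow directly from the stated sensitivity and specificity, as the quick check below shows; the abstract does not give the size of the uninjured group, so the predictive values are not recomputed here.

sens = 181 / 226               # 45 of 226 injuries missed -> 80.1%
spec = 0.995                   # reported overall specificity
lr_pos = sens / (1 - spec)     # ~160.2, as reported
lr_neg = (1 - sens) / spec     # ~0.2, as reported
print(round(sens, 3), round(lr_pos, 1), round(lr_neg, 2))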
Abstract:
BACKGROUND AND PURPOSE: The major goal of acute ischemic stroke treatment is fast and sufficient recanalization. Percutaneous transluminal balloon angioplasty (PTA) and/or placement of a stent might achieve both by compressing the thrombus at the occlusion site. This study assesses the feasibility, recanalization rate, and complications of the 2 techniques in an animal model. MATERIALS AND METHODS: Thirty cranial vessels of 7 swine were occluded by injection of radiopaque thrombi. Fifteen vessel occlusions were treated by PTA alone and 15 by placement of a stent and postdilation. Recanalization was documented immediately after treatment and after 1, 2, and 3 hours. Thromboembolic events and dissections were documented. RESULTS: PTA was significantly faster to perform (mean, 16.6 minutes versus 33.0 minutes for stent placement; P < .001), but the mean recanalization rate after 1 hour was significantly better after stent placement than after PTA alone (67.5% versus 14.6%, P < .001). Due to the self-expanding force of the stent, vessel diameter further increased with time, whereas the recanalization result after PTA was prone to reocclusion. Apart from thromboembolic events related to the passing maneuvers at the occlusion site, no thrombus fragmentation or embolization occurred during balloon inflation or stent deployment. Because the thrombus was compressed directly, flow to side branches at the occlusion site could also be restored. CONCLUSIONS: Stent placement with postdilation proved to be much more efficient than PTA alone in terms of acute and short-term vessel recanalization.
Abstract:
PURPOSE: To determine the feasibility of using a high resolution isotropic three-dimensional (3D) fast T1 mapping sequence for delayed gadolinium-enhanced MRI of cartilage (dGEMRIC) to assess osteoarthritis in the hip. MATERIALS AND METHODS: T1 maps of the hip were acquired using both low and high resolution techniques following the administration of 0.2 mmol/kg Gd-DTPA²⁻ in 35 patients. Both T1 maps were generated from two separate spoiled GRE images. The high resolution T1 map was reconstructed in a plane anatomically equivalent to that of the low resolution map. T1 values from the equivalent anatomic regions containing femoral and acetabular cartilages were measured on the low and high resolution maps and compared using regression analysis. RESULTS: In vivo T1 measurements showed a statistically significant correlation between the low and high resolution acquisitions at 1.5 Tesla (R² = 0.958, P < 0.001). These results demonstrate the feasibility of using a fast two-angle T1 mapping (F2T1) sequence with isotropic spatial resolution (0.8 × 0.8 × 0.8 mm) for quantitative assessment of biochemical status in articular cartilage of the hip. CONCLUSION: The high resolution 3D F2T1 sequence provides accurate T1 measurements in femoral and acetabular cartilages of the hip, which enables the biochemical assessment of articular cartilage in any plane through the joint. It is a powerful tool for researchers and clinicians to acquire high resolution data in a reasonable scan time (< 30 min).
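The abstract does not spell out the F2T1 computation, but two-angle T1 mapping from spoiled GRE images conventionally follows the variable-flip-angle (DESPOT1) relation, sketched here for reference:

S(\alpha) = M_0 \sin\alpha \,\frac{1 - E_1}{1 - E_1 \cos\alpha},
\qquad E_1 = e^{-TR/T_1},

which, linearized over two flip angles \alpha_1 and \alpha_2, gives

E_1 = \frac{S_1/\sin\alpha_1 - S_2/\sin\alpha_2}
           {S_1/\tan\alpha_1 - S_2/\tan\alpha_2},
\qquad T_1 = -\frac{TR}{\ln E_1}.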
Abstract:
INTRODUCTION: Cartilage defects are common pathologies, and surgical cartilage repair shows promising results. In its postoperative evaluation, the magnetic resonance observation of cartilage repair tissue (MOCART) score, which uses different variables to describe the constitution of the cartilage repair tissue and the surrounding structures, is widely used. High-field magnetic resonance imaging (MRI) and 3-dimensional (3D) isotropic sequences may combine ideal preconditions to enhance the diagnostic performance of cartilage imaging. The aim of this study was to introduce an improved 3D MOCART score using the possibilities of an isotropic 3D true fast imaging with steady-state precession (True-FISP) sequence in the postoperative evaluation of patients after matrix-associated autologous chondrocyte transplantation (MACT), as well as to compare the results with the conventional 2D MOCART score based on standard MR sequences. MATERIAL AND METHODS: The study was approved by the local ethics commission. One hundred consecutive MR scans in 60 patients at standard follow-up intervals of 1, 3, 6, 12, 24, and 60 months after MACT of the knee joint were prospectively included. The mean follow-up interval of this cross-sectional evaluation was 21.4 +/- 20.6 months; the mean age of the patients was 35.8 +/- 9.4 years. MRI was performed on a 3.0 Tesla unit. All variables of the standard 2D MOCART score were part of the new 3D MOCART score. Furthermore, additional variables and options were included with the aims of using the capabilities of isotropic MRI, incorporating the results of recent studies, and adapting to the needs of patients and physicians in a clinical routine examination. A proton-density turbo spin-echo sequence, a T2-weighted dual fast spin-echo (dual-FSE) sequence, and a T1-weighted turbo inversion recovery magnitude (TIRM) sequence were used to assess the standard 2D MOCART score; an isotropic 3D-TrueFISP sequence was used to evaluate the new 3D MOCART score. All 9 variables of the 2D MOCART score were compared with the corresponding variables obtained by the 3D MOCART score using the Pearson correlation coefficient; additionally, the subjective quality and possible artifacts of the MR sequences were analyzed. RESULTS: The comparison between the standard 2D MOCART score and the new 3D MOCART score showed a highly significant (P < 0.001) correlation for the 8 variables "defect fill," "cartilage interface," "surface," "adhesions," "structure," "signal intensity," "subchondral lamina," and "effusion," with Pearson coefficients between 0.566 and 0.932. The variable "bone marrow edema" correlated significantly (P < 0.05; Pearson coefficient: 0.257). The subjective quality of the 3 standard MR sequences was comparable to that of the isotropic 3D-TrueFISP sequence; artifacts were more frequently visible within the 3D-TrueFISP sequence. CONCLUSION: In the clinical routine follow-up after cartilage repair, the 3D MOCART score, assessed with only 1 high-resolution isotropic MR sequence, provides information comparable to the standard 2D MOCART score. Hence, the new 3D MOCART score has the potential to combine the information of the standard 2D MOCART score with the possible advantages of isotropic 3D MRI at high field. A clear limitation of the 3D-TrueFISP sequence was the high number of artifacts. Future studies will have to demonstrate the clinical benefits of the 3D MOCART score.
Abstract:
The European foundry business is a traditional, less RTD-intensive industry that is dominated by SMEs and forms a significant part of Europe’s manufacturing industry. The efficient design and manufacturing of cast components and the corresponding tooling is a crucial success factor for these companies. To achieve this, information and knowledge about the design, planning and manufacturing of cast components needs to be accessible in a fast and structured way.
Abstract:
OBJECTIVE The aim of this work is to investigate cardiac proton density (PD) weighted fast field echo (FFE) post-mortem magnetic resonance (PMMR) imaging and to compare it with standard cardiac PMMR imaging (T1-weighted and T2-weighted turbo spin echo (TSE)), post-mortem CT (PMCT) and autopsy. MATERIALS AND METHODS Two human cadavers sequentially underwent cardiac PMCT, PMMR imaging (PD-weighted FFE, T1-weighted and T2-weighted TSE) and autopsy. The cardiac PMMR images were compared with each other as well as with the PMCT and autopsy findings. RESULTS In the first case, cardiac PMMR exhibited a focal region of low signal in the PD-weighted FFE and T2-weighted TSE images, surrounded by a signal-intense rim in the T2-weighted images. T1-weighted TSE and PMCT did not identify any focal abnormality. Macroscopic inspection identified a blood clot; histology confirmed this to be a thrombus adhering to a myocardial infarction. In the second case, a myocardial rupture with cardiac tamponade, located at the anterior wall of the left ventricle, was identified in all PMMR images; PMCT excluded additional ruptures. The rupture appeared hypo-intense in the PD-weighted FFE and T2-weighted TSE images, and as small clustered hyper-intense spots in the T1-weighted TSE images. Autopsy confirmed the PMMR and PMCT findings. CONCLUSIONS These initial results show PD-weighted FFE to be a valuable imaging sequence in addition to traditional T2-weighted TSE imaging for blood clots and myocardial haemorrhage, with clearer contrast between affected and healthy myocardium.