880 results for robust hedging


Abstract:

We present a non-linear technique to invert strong-motion records with the aim of obtaining the final slip and rupture velocity distributions on the fault plane. In this thesis, the ground motion simulation is obtained by evaluating the representation integral in the frequency domain. The Green's tractions are computed using the discrete wave-number integration technique, which provides the full wave-field in a 1D layered propagation medium. The representation integral is computed through a finite-element technique based on a Delaunay triangulation of the fault plane. The rupture velocity is defined on a coarser regular grid, and rupture times are computed by integration of the eikonal equation. For the inversion, the slip distribution is parameterized by 2D overlapping Gaussian functions, which make it easy to relate the spectrum of the possible solutions to the minimum resolvable wavelength, determined by the source-station distribution and the data processing. The inverse problem is solved by a two-step procedure aimed at separating the computation of the rupture velocity from the evaluation of the slip distribution, the latter being a linear problem once the rupture velocity is fixed. The non-linear step is solved by optimization of an L2 misfit function between synthetic and real seismograms, with the solution searched for using the Neighbourhood Algorithm; the linear step is solved with the conjugate gradient method. The developed methodology has been applied to the M7.2 Iwate-Miyagi Nairiku, Japan, earthquake. The estimated seismic moment is 2.63×10^26 dyne·cm, corresponding to a moment magnitude MW 6.9, while the mean rupture velocity is 2.0 km/s. A large slip patch extends from the hypocenter to the southern shallow part of the fault plane, and a second, relatively large slip patch is found in the northern shallow part. Finally, we give a quantitative estimation of the errors associated with the parameters.
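As a consistency check on the numbers above, the standard Hanks-Kanamori relation converts seismic moment to moment magnitude. A minimal sketch in Python (the relation is standard; the function name is ours, and the constant assumes M0 in dyne·cm):

```python
import math

# Hanks-Kanamori relation: Mw = (2/3) * log10(M0) - 10.7, with M0 in dyne*cm.
def moment_magnitude(m0_dyne_cm: float) -> float:
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

print(round(moment_magnitude(2.63e26), 2))  # -> 6.91, consistent with MW 6.9
```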

Abstract:

3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, fluoroscopic analysis was characterized in depth in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, and (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified as the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to enlarge the convergence basin around the optimal pose, the local approach used sequential alignment of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as in methodological research studies; the mono-planar analysis may be sufficient for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed that halved the analysis time, delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics.
Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
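The sequential-alignment idea in the local approach can be sketched as a coordinate-wise search over the six pose parameters in order of sensitivity. The cost function, ordering and bounds below are illustrative assumptions, not the thesis' actual image-similarity metric:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical image-similarity cost: lower is better. In a real pipeline this
# would compare a projection of the 3D bone model against the fluoroscopic image.
def cost(pose: np.ndarray) -> float:
    target = np.array([2.0, -1.0, 40.0, 5.0, -3.0, 10.0])  # stand-in "true" pose
    return float(np.sum((pose - target) ** 2))

# 6 DOF ordered by (assumed) sensitivity: in-plane translations and rotation
# first, out-of-plane parameters last, as in the sequential local approach.
order = [0, 1, 5, 3, 4, 2]  # tx, ty, rz, rx, ry, tz (illustrative ordering)

pose = np.zeros(6)  # initial guess without user interaction
for _ in range(3):  # a few sweeps over all DOFs
    for dof in order:
        def cost_1d(v, d=dof):
            trial = pose.copy()
            trial[d] = v
            return cost(trial)
        pose[dof] = minimize_scalar(cost_1d, bounds=(-50, 50), method="bounded").x

print(np.round(pose, 3))  # converges to the stand-in target pose
```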

Abstract:

Bioinformatics has, in the last few decades, played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome has been obtained, the major problem becomes learning as much as possible about its coding regions. Protein sequence annotation is challenging and, due to the size of the problem, only computational approaches can provide a feasible solution. As recently pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution is given by cross-genome comparisons. The present thesis describes a non-hierarchical sequence clustering method for automatic large-scale protein annotation, called "The Bologna Annotation Resource Plus" (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences, characterized by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) inside clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer the three-dimensional structure (when a template is available). This is made possible by cluster-specific HMM profiles that can be used to calculate reliable template-to-target alignments even in the case of distantly related proteins (sequence identity < 30%). Other BAR+-based applications developed during my doctorate include the prediction of magnesium-binding sites in human proteins, the classification of the ABC transporter superfamily, and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment, BAR+ placed among the ten most accurate methods. At present, BAR+ is freely available as a web server for functional and structural protein sequence annotation at http://bar.biocomp.unibo.it/bar2.0.
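The transfer-by-homology step inside a cluster can be illustrated with a minimal sketch; the data model, threshold and function names below are illustrative assumptions and stand in for BAR+'s actual statistical validation:

```python
from collections import Counter

# Hypothetical cluster: sequence id -> set of GO terms. Identifiers are
# illustrative, not BAR+'s actual data model.
cluster = {
    "P001": {"GO:0005524", "GO:0004672"},
    "P002": {"GO:0005524", "GO:0004672", "GO:0006468"},
    "P003": {"GO:0005524"},
    "Q_target": set(),  # unannotated sequence in the same cluster
}

def transfer_terms(cluster, target, min_fraction=0.5):
    """Transfer a GO term to `target` if it annotates at least `min_fraction`
    of the already-annotated cluster members (a crude stand-in for BAR+'s
    statistical validation of term enrichment)."""
    annotated = [terms for sid, terms in cluster.items()
                 if sid != target and terms]
    counts = Counter(t for terms in annotated for t in terms)
    return {t for t, n in counts.items() if n / len(annotated) >= min_fraction}

print(transfer_terms(cluster, "Q_target"))
# {'GO:0005524', 'GO:0004672'}: terms shared by most cluster members
```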

Abstract:

The present thesis focuses on the problem of robust output regulation for minimum-phase nonlinear systems by means of identification techniques. Given a controlled plant and an exosystem (an autonomous system that generates the references or disturbances), the control goal is to design a regulator able to process the only available measurement, i.e. the error/output variable, in order to make it vanish asymptotically. In this context, such a regulator can be designed following the well-known "internal model principle", which states that the regulation objective can be achieved by embedding a replica of the exosystem model in the controller structure. The main problem arises when the exosystem model is affected by parametric or structural uncertainties: in this case it is not possible to reproduce the exact behavior of the exogenous system in the regulator, and hence the control goal cannot be achieved. In this work, the idea is to solve the problem by developing a general framework in which a standard regulator coexists with an estimator able to guarantee (when possible) the best estimate of all the uncertainties present in the exosystem, in order to give robustness to the overall control loop.
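The internal model principle is easiest to see for the simplest exosystem, a constant disturbance, whose internal model is an integrator on the regulation error. A minimal discrete-time sketch (plant, gains and values are illustrative assumptions; the thesis addresses the harder case where the exosystem parameters must additionally be estimated):

```python
# Illustrative scalar plant x+ = 0.9 x + u + d with an unknown constant
# disturbance d generated by the simplest exosystem, d+ = d. The internal
# model principle says the controller must embed a copy of that exosystem:
# here, an integrator driven by the regulation error.
a, d, r = 0.9, 1.5, 0.0          # plant pole, disturbance, reference
kp, ki = 0.5, 0.2                # illustrative stabilizing gains
x, z = 0.0, 0.0                  # plant state, internal-model (integrator) state

for k in range(200):
    e = r - x                    # the only measured variable, as in the text
    u = kp * e + ki * z          # feedback + internal-model contribution
    z = z + e                    # integrator: replica of the d+ = d exosystem
    x = a * x + u + d

print(round(r - x, 6))           # regulation error -> 0 despite the unknown d
```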

Abstract:

Magnetic Resonance Spectroscopy (MRS) is an advanced clinical and research application which provides a specific biochemical and metabolic characterization of tissues through the detection and quantification of key metabolites for diagnosis and disease staging. The "Associazione Italiana di Fisica Medica (AIFM)" has promoted the activity of the "Interconfronto di spettroscopia in RM" working group. The purpose of the study is to compare and analyze results obtained by performing MRS on scanners from different manufacturers, in order to compile a robust protocol for spectroscopic examinations in clinical routine. This thesis contributes to the project using the GE Signa HDxt 1.5 T scanner at Pavilion no. 11 of the S. Orsola-Malpighi hospital in Bologna. The spectral analyses were performed with the jMRUI package, which includes a wide range of preprocessing and quantification algorithms for signal analysis in the time domain. After quality assurance on the scanner with standard and innovative methods, spectra both with and without suppression of the water peak were acquired on the GE test phantom. The comparison of the ratios of the metabolite amplitudes over creatine computed by the workstation software, which works in the frequency domain, and by jMRUI shows good agreement, suggesting that quantifications in both domains may lead to consistent results. The characterization of an in-house phantom provided by the working group achieved its goal of assessing the solution content and the metabolite concentrations with good accuracy. The soundness of the experimental procedure and data analysis was demonstrated by the correct estimation of the T2 of water, the observed biexponential relaxation curve of creatine, and the correct TE value at which the modulation by J-coupling causes the lactate doublet to be inverted in the spectrum. The work of this thesis demonstrates that it is possible to perform measurements and establish protocols for data analysis, based on the physical principles of NMR, which are able to provide robust values for the spectral parameters of clinical use.
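The kind of relaxometry mentioned above (e.g., estimating the T2 of water) amounts to fitting the decay of a peak amplitude versus echo time. A minimal sketch with synthetic data (the values and the monoexponential model are illustrative assumptions; creatine, as noted, required a biexponential model):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic multi-echo data: peak amplitude vs. echo time TE for a species
# with T2 = 80 ms (illustrative value, not the phantom's). In the study,
# amplitudes would come from jMRUI time-domain quantification at each TE.
def t2_decay(te, s0, t2):
    return s0 * np.exp(-te / t2)

te = np.array([30., 60., 90., 120., 180., 240., 360.])            # ms
rng = np.random.default_rng(0)
signal = t2_decay(te, 100.0, 80.0) + rng.normal(0, 0.5, te.size)  # noisy echoes

(s0, t2), _ = curve_fit(t2_decay, te, signal, p0=(signal[0], 100.0))
print(f"S0 = {s0:.1f}, T2 = {t2:.1f} ms")  # recovers T2 of about 80 ms
```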

Abstract:

As tumour and biopsy specimens become smaller, recognition of the anatomical structures relevant for staging is increasingly challenging. So far, no marker is known that reliably discriminates between the muscularis propria (MP) and the muscularis mucosae (MM) of the gastrointestinal tract. Recently, smoothelin expression has been shown to differ between the MP and MM of the urinary bladder. We aimed to analyse the expression of smoothelin in the MP and MM of the gastrointestinal tract in order to define a novel diagnostic tool to identify MM bundles.

Abstract:

Among synthetic vaccines, virus-like particles (VLPs) are used for their ability to induce strong humoral responses. Very little has been reported on CD4(+) T-cell responses induced by VLP-based vaccines, despite the requirement of helper T cells for antibody isotype switching. Further knowledge of helper T cells is also needed for the optimization of CD8(+) T-cell vaccination. Here, we analysed human CD4(+) T-cell responses to vaccination with MelQbG10, a Qβ-VLP covalently linked to a long peptide derived from the melanoma self-antigen Melan-A. In all analysed patients, we found strong antibody responses, mainly of the IgG1 and IgG3 isotypes, and concomitant Th1-biased CD4(+) T-cell responses specific for Qβ. Comparable, although weaker, B- and CD4(+) T-cell responses were also found specific for the Melan-A cargo peptide. Further optimization is required to shift the response more towards the cargo peptide. Nevertheless, the data demonstrate the high potential of VLPs for inducing humoral and cellular immune responses by mounting powerful CD4(+) T-cell help.

Abstract:

Basophil activation tests (BAT) rely on different combinations of basophil selection and activation markers. Whereas activation markers, especially CD63, are widely validated, the most suitable and robust marker for basophil selection is still a matter of debate.

Abstract:

There is no common concept of how to estimate the transmissibility of Chlamydia trachomatis from cross-sectional sexual partnership studies. Using a mathematical model that takes into account the dynamics of chlamydia transmission and sexual partnership formation, we report refined estimates of chlamydia transmissibility in heterosexual partnerships.

Abstract:

Protein scaffolds that support molecular recognition have multiple applications in biotechnology. Thus, protein frames with robust structural cores but adaptable surface loops are in continued demand. Recently, notable progress has been made in the characterization of Ig domains of intracellular origin, in particular the modular components of the titin myofilament. These Ig domains belong to the I (intermediate) type and are remarkably stable, highly soluble and undemanding to produce in the cytoplasm of Escherichia coli. Using the Z1 domain from titin as a representative, we show that the I-Ig fold tolerates drastic diversification of its CD loop, constituting an effective peptide display system. We examine the stability of CD-loop-grafted Z1-peptide chimeras using differential scanning fluorimetry, Fourier transform infrared spectroscopy and nuclear magnetic resonance, and demonstrate that the introduction of bioreactive affinity binders in this position does not compromise the structural integrity of the domain. Further, the binding efficiency of the exogenous peptide sequences in Z1 is analyzed using pull-down assays and isothermal titration calorimetry. We show that an internally grafted FLAG affinity tag is functional within the context of the fold, interacting with the anti-FLAG M2 antibody both in solution and in affinity gel. Together, these data reveal the potential of the intracellular Ig scaffold for targeted functionalization.

Abstract:

Conventional liquid-liquid extraction (LLE) methods require large volumes of fluids to achieve the desired mass transfer of a solute, which is unsuitable for systems dealing with a low-volume or high-value product. An alternative to these methods is to scale down the process. Millifluidic devices share many of the benefits of microfluidic systems, including low fluid volumes, an increased interfacial area-to-volume ratio, and predictability. A robust millifluidic device was created from acrylic, glass, and aluminum. The channel is lined with a hydrogel cured in its bottom half; this hydrogel stabilizes co-current laminar flow of immiscible organic and aqueous phases. Mass transfer of the solute occurs across the interface of these contacting phases. Using a y-junction, an aqueous emulsion is created in an organic phase. The emulsion travels through a length of tubing and then enters the co-current laminar flow device, where the emulsion is broken and each phase can be collected separately. The inclusion of this emulsion formation and separation increases the contact area between the organic and aqueous phases, thereby increasing the area over which mass transfer can occur. Using this design, 95% extraction efficiency was obtained, where 100% corresponds to equilibrium. Continued exploration of this LLE process will allow it to be optimized and, with better understanding, more accurately modeled. This system has the potential to scale up to the industrial level and provide the efficient extraction required, with low fluid volumes and a well-behaved system.
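A standard way to express extraction efficiency relative to equilibrium, consistent with "100% corresponds to equilibrium", is the following; the notation is illustrative, since the abstract does not give the exact formula used:

```latex
% Illustrative definition of extraction efficiency relative to equilibrium.
% C_in, C_out: solute concentration in the feed phase entering/leaving the device;
% C_eq: feed-phase solute concentration at thermodynamic equilibrium.
E = \frac{C_{\mathrm{in}} - C_{\mathrm{out}}}{C_{\mathrm{in}} - C_{\mathrm{eq}}} \times 100\%
```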

Abstract:

Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech, which could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text which is going to be parsed. Practical experience shows that, apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not generally in a whole corpus. In order to complete the overall project, it was necessary to address a number of smaller problems. These were:

1. the adaptation of a suitable formalism able to describe the formal grammar of the system;
2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information;
3. the development of a formal grammar able to robustly parse Czech sentences from the test suite;
4. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words);
5. the development of a set of sample sentences containing a reasonable number of grammatical and ungrammatical phenomena, covering some of the most typical syntactic constructions used in Czech.

Number 3, building a formal grammar, was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language can ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method takes an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localization and identification of syntactic errors: without precise knowledge of the nature and location of syntactic errors, it is not possible to build a reliable estimate of the "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating one huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later; this is especially helpful for testing and debugging the grammar. The sample of the syntactic dictionary containing lexico-syntactic information (task 4) now has slightly more than 1000 lexical items representing all classes of words.
During the creation of the dictionary it turned out that the task of assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during its development. The consistency of new and modified rules with the rules already in the formal grammar is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system to another language, it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of the primary lexico-syntactic information). The formalism and methods used in this project can be applied to other Slavic languages without substantial changes.
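The incremental build-and-retest cycle described above can be illustrated with a toy context-free grammar; the rules, sentences and the use of Python/nltk are illustrative stand-ins for the project's Czech formalism:

```python
import nltk

# A toy incremental grammar in the spirit described above: start with rules
# for the most common phenomena, then add rules and re-check the whole
# test-bed for regressions. English stand-ins, not the project's Czech grammar.
grammar_v1 = nltk.CFG.fromstring("""
S  -> NP VP
NP -> 'dogs' | 'cats'
VP -> 'sleep'
""")

grammar_v2 = nltk.CFG.fromstring("""
S  -> NP VP
NP -> 'dogs' | 'cats'
VP -> 'sleep' | V NP
V  -> 'chase'
""")

test_bed = [["dogs", "sleep"], ["dogs", "chase", "cats"]]

def parses(parser, tokens):
    try:
        return any(True for _ in parser.parse(tokens))
    except ValueError:  # grammar does not cover some input words
        return False

def coverage(grammar, sentences):
    """Check which test-bed sentences parse under the current grammar."""
    parser = nltk.ChartParser(grammar)
    return [parses(parser, s) for s in sentences]

print(coverage(grammar_v1, test_bed))  # [True, False]: phenomenon not yet covered
print(coverage(grammar_v2, test_bed))  # [True, True]: new rule, no regression
```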