957 results for Speed up
Abstract:
We report the synthesis and application of several ion-tagged catalysts in organometallic catalysis and organocatalysis. Installing an ionic group on the backbone of a known catalyst generally produces two main effects: i) a modification of the solubility of the catalyst: with a judicious choice of the ion pair, the ion tag can confer on the catalyst a solubility profile suitable for catalyst recycling; ii) the ionic group can play a non-innocent role in the process considered: if stabilizing interactions are established between the ionic group and the charges developing in the transition state, the reaction can speed up. We describe the use of an ion-tagged diphenylprolinol as a Zn ligand. The chiral ligand, grafted onto an ionic liquid (IL), was recycled 10 times with no loss of reactivity or selectivity when employed in the first example of enantioselective addition of ZnEt2 to aldehydes in ILs. An ammonium-tagged phosphine proved able to stabilize Pd catalysts for the Suzuki reaction in ILs. The ionic phase was recycled 6 times with no detectable loss of activity and very low Pd leaching into the organic phase. This catalytic system was also employed for the functionalization of the challenging substrate 5,11-dibromotetracene. In the field of organocatalysis, we prepared two ion-tagged derivatives of the MacMillan imidazolidinone. The results of the asymmetric Diels-Alder reaction between trans-cinnamaldehyde and cyclopentadiene depended strongly on the position and nature of the ionic group. Finally, when O-TMS-diphenylprolinol was tagged with an imidazolium ion through a silyl ether linker, an efficient catalyst for the asymmetric addition of aldehydes to nitroolefins was obtained. The catalyst displayed enhanced reactivity and the same high level of selectivity as the untagged parent catalyst, and it could be employed under a wide range of reaction conditions, including the use of water as solvent.
Abstract:
This thesis investigates decomposition and reformulation for solving Integer Linear Programming problems. The method is often very successful computationally, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median and generalized assignment. Until now, however, the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of automatically decomposing and reformulating the input problem, can be applied as a black-box solution algorithm, and works as a complement and alternative to standard solution techniques. The idea of decomposing and reformulating (usually called Dantzig-Wolfe Decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the partially convexified polyhedron(s) obtained. For a given MIP, several decompositions can be defined, depending on which sets of constraints are convexified. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be written as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (master) and checking (pricing) whether a variable with negative reduced cost exists; if so, it is added to the master, which is solved again (column generation), otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated in a branch-and-bound scheme (branch-and-price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a substantial speed-up in solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
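As a hedged illustration (a sketch under the assumptions stated here, not a formulation taken verbatim from the thesis), the Dantzig-Wolfe master for a minimization MIP with a single convexified slave, keeping both the original variables $x$ and the extended variables $\lambda_p$ attached to the extreme points $\{v_p\}_{p\in P}$ of the slave polyhedron, can be written as

$$
\begin{aligned}
\min\;& c^\top x\\
\text{s.t.}\;& Ax \ge b &&\text{(master constraints kept in original form)}\\
& x = \sum_{p\in P}\lambda_p v_p &&\text{(linking constraints)}\\
& \sum_{p\in P}\lambda_p = 1 &&\text{(convexity constraint)}\\
& \lambda_p \ge 0\;\;\forall p\in P,\qquad x\ \text{integer.}
\end{aligned}
$$

Pricing then searches for an extreme point of the slave whose column has negative reduced cost with respect to the current master duals; columns are added until no such point exists.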
Abstract:
3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and thus avoiding the soft tissue artefacts that limit the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications, but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is still required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down their translation into clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolution, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to enlarge the convergence basin towards the optimal pose, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translations, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies. The mono-planar analysis may be sufficient for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics. A mixed region-growing and level-set segmentation method was proposed that halved the analysis time, delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semi-automatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
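To make the idea of local pose optimization concrete, the following minimal Python sketch (not the thesis code; the point model, poses, projection and the Powell optimizer are illustrative assumptions) estimates a 6-DOF pose by minimizing the 2D reprojection error of a rigid point model. With a single, mono-planar projection the out-of-plane translation is the least sensitive parameter, and the result depends on the starting guess, which is exactly the limitation discussed above.

```python
# Hedged sketch: local 6-DOF pose refinement by minimizing 2D reprojection error.
import numpy as np
from scipy.optimize import minimize

def rot(rx, ry, rz):
    """Rotation matrix from x-y-z Euler angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(points, pose, focal=1000.0):
    """Pinhole projection of a 3D point cloud posed by (rx, ry, rz, tx, ty, tz)."""
    p = points @ rot(*pose[:3]).T + pose[3:]
    return focal * p[:, :2] / p[:, 2:3]          # perspective division

rng = np.random.default_rng(0)
model = rng.normal(scale=20.0, size=(200, 3))    # synthetic bone-like point model [mm]
true_pose = np.array([0.2, -0.1, 0.3, 5.0, -3.0, 800.0])
target_2d = project(model, true_pose)            # "measured" projection

def cost(pose):
    return np.mean(np.sum((project(model, pose) - target_2d) ** 2, axis=1))

# A local optimizer recovers the pose only if started close enough to it.
guess = true_pose + np.array([0.1, 0.1, -0.1, 10.0, 10.0, 50.0])
res = minimize(cost, guess, method="Powell")
print("pose error:", np.round(res.x - true_pose, 3))
```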
Abstract:
Graphene's excellent properties make it a promising candidate for building future nanoelectronic devices. Nevertheless, the absence of an energy gap is an open problem for transistor applications. In this thesis, graphene nanoribbons (GNRs) and pattern-hydrogenated graphene, two alternatives for inducing an energy gap in graphene, are investigated by means of numerical simulations. A tight-binding non-equilibrium Green's function (NEGF) code is developed for the simulation of GNR-FETs. To speed up the simulations, a non-parabolic effective mass model and a mode-space tight-binding method are developed. The code is used for simulation studies of both conventional and tunneling FETs. The simulations show the great potential of conventional narrow GNR-FETs, but highlight at the same time the leakage problems in the off-state due to various tunneling mechanisms. The leakage problems become more severe as the width of the devices is made larger, and thus the band gap smaller, resulting in a poor on/off current ratio. The tunneling FET architecture can partially solve these problems thanks to its improved subthreshold slope; however, it is also shown that edge roughness, unless well controlled, can have a detrimental effect on the off-state performance. In the second part of this thesis, pattern-hydrogenated graphene is simulated by means of a tight-binding model. A realistic model for patterned hydrogenation, including disorder, is developed. The model is validated by direct comparison of the momentum-energy resolved density of states with experimental angle-resolved photoemission spectroscopy. The scaling of the energy gap and of the localization length with the parameters defining the pattern geometry is also presented. The results suggest that a substantial transport gap is attainable with experimentally achievable hydrogen concentrations.
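The NEGF machinery mentioned above can be illustrated with a deliberately simple one-dimensional toy model (a hedged sketch under the stated assumptions, not the GNR-FET simulator developed in the thesis): a nearest-neighbour tight-binding chain coupled to two semi-infinite leads, whose surface Green's function is known analytically.

```python
# Hedged sketch: ballistic NEGF transmission of a 1D tight-binding chain.
import numpy as np

t = 2.7          # hopping energy [eV]; graphene-like value used for a 1D toy
N = 10           # number of device sites
eta = 1e-6j      # small imaginary part for the retarded Green's function

def surface_g(E):
    """Retarded surface Green's function of a semi-infinite 1D chain."""
    z = E + eta
    root = np.sqrt(z**2 - 4 * t**2 + 0j)
    g = (z - root) / (2 * t**2)
    # keep the branch with non-positive imaginary part (retarded solution)
    return g if g.imag <= 0 else (z + root) / (2 * t**2)

H = np.diag(-t * np.ones(N - 1), 1) + np.diag(-t * np.ones(N - 1), -1)

energies = np.linspace(-3 * t, 3 * t, 601)
T = []
for E in energies:
    g = surface_g(E)
    sigma_L = np.zeros((N, N), complex); sigma_L[0, 0] = t**2 * g
    sigma_R = np.zeros((N, N), complex); sigma_R[-1, -1] = t**2 * g
    G = np.linalg.inv((E + eta) * np.eye(N) - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    T.append(np.real(np.trace(gamma_L @ G @ gamma_R @ G.conj().T)))

print("T(E=0) =", round(T[300], 3))   # ~1 inside the band of a clean chain
```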
Abstract:
This thesis presents the outcomes of my Ph.D. course in telecommunications engineering. The focus of my research has been on Global Navigation Satellite Systems (GNSS) and, in particular, on the design of aiding schemes operating at both the position and the physical level, and on the evaluation of their feasibility and advantages. Assistance techniques at the position level are considered to enhance receiver availability in challenging scenarios where satellite visibility is limited. Novel positioning techniques relying on peer-to-peer interaction and exchange of information are thus introduced. More specifically, two different techniques are proposed: the Pseudorange Sharing Algorithm (PSA), based on the exchange of GNSS data, which allows a user with scarce satellite visibility to obtain a coarse position, and the Hybrid approach, which also improves the accuracy of the positioning solution. At the physical level, aiding schemes are investigated to improve the receiver's ability to synchronize with satellite signals. An innovative code acquisition strategy for dual-band receivers, the Cross-Band Aiding (CBA) technique, is introduced to speed up initial synchronization by exploiting the exchange of time references between the two bands. In addition, vector configurations for code tracking are analyzed and their feedback generation process is thoroughly investigated.
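As background for the positioning techniques above, the following hedged Python sketch (not the PSA or Hybrid algorithms themselves; the satellite geometry and numbers are illustrative) shows the standard single-epoch least-squares solution for receiver position and clock bias from pseudoranges, which is the baseline that peer-to-peer aiding schemes build upon.

```python
# Hedged sketch: Gauss-Newton least-squares GNSS positioning from pseudoranges.
import numpy as np

def solve_position(sat_pos, pseudoranges, x0=np.zeros(3), b0=0.0, iters=8):
    """Estimate receiver ECEF position [m] and clock bias [m] from pseudoranges."""
    x, b = np.array(x0, dtype=float), float(b0)
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x, axis=1)           # geometric ranges
        H = np.hstack([(x - sat_pos) / rho[:, None],        # line-of-sight partials
                       np.ones((len(rho), 1))])             # clock-bias column
        dz = pseudoranges - (rho + b)                        # measurement residuals
        dx = np.linalg.lstsq(H, dz, rcond=None)[0]
        x, b = x + dx[:3], b + dx[3]
    return x, b

# Illustrative geometry: receiver on the Earth's surface, four visible satellites.
truth = np.array([6371e3, 0.0, 0.0])
sats = np.array([[20200e3, 0.0, 0.0],
                 [15000e3, 12000e3, 8000e3],
                 [15000e3, -12000e3, 8000e3],
                 [16000e3, 0.0, -14000e3]])
clock_bias = 150.0                                           # metres
pr = np.linalg.norm(sats - truth, axis=1) + clock_bias
est, est_b = solve_position(sats, pr)
print(np.round(est - truth, 3), round(est_b - clock_bias, 3))   # ~zero errors
```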
Abstract:
By pulling and releasing the tension on protein homomers with the Atomic Force Microscope (AFM) at different pulling speeds, dwell times and dwell distances, the observed force response of the protein can be fitted with suitable theoretical models. To this end, we developed mathematical procedures and open-source computer codes for driving such experiments and for fitting Bell's model to experimental protein unfolding forces and protein folding frequencies. We applied these techniques to the study of the proteins GB1 (the B1 IgG-binding domain of protein G from Streptococcus) and I27 (a module of human cardiac titin) in aqueous solutions of protecting osmolytes such as dimethyl sulfoxide (DMSO), glycerol and trimethylamine N-oxide (TMAO). In order to gain a molecular understanding of the experimental results, we developed an Ising-like model for proteins that incorporates the osmophobic nature of their backbone. The model benefits from analytical thermodynamics and from kinetics amenable to Monte Carlo simulation. The prevailing view used to be that small protecting osmolytes bridge the separating beta-strands of mechanically resistant proteins, presumably shifting the transition state to significantly larger distances that correlate with the molecular size of the osmolyte. Our experiments showed instead that protecting osmolytes slow down protein unfolding and speed up protein folding at physiological pH without shifting the protein transition state along the mechanical reaction coordinate. Together with the theoretical results of the Ising model, our results lend support to the osmophobic theory, according to which osmolyte stabilisation results from the preferential exclusion of the osmolyte molecules from the protein backbone. The results obtained during this thesis work have markedly improved our understanding of the strategy selected by Nature to strengthen protein stability in hostile environments, shifting the focus from hypothetical protein-osmolyte interactions to the more general mechanism based on the osmophobicity of the protein backbone.
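As an illustration of the kind of fit mentioned above (a hedged sketch with synthetic numbers, not the thesis data or code), the Bell-Evans expression for the most probable unfolding force as a function of loading rate can be fitted to force-spectroscopy data to extract the intrinsic unfolding rate k0 and the distance dx to the transition state.

```python
# Hedged sketch: fitting the Bell-Evans most-probable unfolding force.
import numpy as np
from scipy.optimize import curve_fit

kBT = 4.11   # thermal energy at ~298 K [pN*nm]

def bell_evans(loading_rate, k0, dx):
    """Most probable unfolding force [pN] versus loading rate [pN/s]."""
    return (kBT / dx) * np.log(loading_rate * dx / (k0 * kBT))

# Synthetic "measured" forces at several loading rates (illustrative values).
rates = np.array([1e2, 1e3, 1e4, 1e5])                              # pN/s
forces = bell_evans(rates, k0=0.05, dx=0.25) + np.array([2.0, -1.5, 1.0, -0.5])

popt, _ = curve_fit(bell_evans, rates, forces, p0=(0.1, 0.3))
print("k0 = %.3f 1/s, dx = %.3f nm" % tuple(popt))
```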
Abstract:
Although several clinical tests have been developed to qualitatively describe complex motor tasks by functional testing, these methods often depend on the clinician's interpretation, experience and training, which makes the assessment results inconsistent and without the precision required to objectively assess the effect of a rehabilitative intervention. A more detailed characterization is required to fully capture the various aspects of motor control and performance during complex movements of the lower and upper limbs. Cost-effective and clinically applicable instrumented tests would enable quantitative assessment of performance on a subject-specific basis, overcoming the limitations due to the lack of objectiveness related to individual judgment, and possibly disclosing subtle alterations that are not clearly visible to the observer. Postural motion measurements at additional locations, such as the lower and upper limbs and the trunk, may be necessary in order to obtain information about inter-segmental coordination during the different functional tests used in clinical practice. With these considerations in mind, this Thesis aims: i) to propose a novel quantitative assessment tool for the kinematic and dynamic evaluation of a multi-link kinematic chain during several functional motor tasks (i.e. squat, sit-to-stand, postural sway), using one single-axis accelerometer per segment; ii) to present a novel quantitative technique for the estimation of upper limb joint kinematics, considering a 3-link kinematic chain during the Fugl-Meyer Motor Assessment and using one inertial measurement unit per segment. The suggested methods could bring several benefits to clinical practice. The use of objective biomechanical measurements, provided by inertial sensor-based techniques, may help clinicians to: i) objectively track changes in motor ability, ii) provide timely feedback about the effectiveness of administered rehabilitation interventions, iii) enable intervention strategies to be modified or changed if found to be ineffective, and iv) speed up the experimental sessions when several subjects are asked to perform different functional tests.
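As a minimal illustration of how a single-axis accelerometer can yield segment kinematics (a hedged sketch under a quasi-static assumption, not the estimation method developed in the Thesis), the inclination of the sensing axis can be recovered from the gravity component it measures.

```python
# Hedged sketch: quasi-static segment inclination from one single-axis accelerometer.
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def inclination(acc_axis):
    """Inclination of the sensing axis with respect to the horizontal plane [deg]."""
    return np.degrees(np.arcsin(np.clip(acc_axis / G, -1.0, 1.0)))

readings = np.array([9.81, 8.50, 4.90, 0.00])   # m/s^2, illustrative samples
print(inclination(readings))                    # ~[90, 60, 30, 0] degrees
```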
Abstract:
The aim of my thesis is to parallelize the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the free energy of a molecular system in Molecular Dynamics simulations. WHAM works in post-processing in cooperation with another algorithm called Umbrella Sampling. Umbrella Sampling adds a biasing term to the potential energy of the system in order to force the system to sample a specific region of configurational space. N independent simulations are performed in order to sample the whole region of interest. Subsequently, the WHAM algorithm is used to estimate the original system energy starting from the N atomic trajectories. The parallelization of WHAM has been performed with CUDA, a language that allows code to run on the GPUs of NVIDIA graphics cards, which have a parallel architecture. The parallel implementation can substantially speed up the WHAM execution compared to previous serial CPU implementations. However, the WHAM CPU code presents some timing criticalities for very high numbers of interactions. The algorithm has been written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfactory, with an increase in performance when the model was executed on graphics cards of higher compute capability. Nonetheless, the GPUs used to test the algorithm are quite old and not designed for scientific calculations. It is likely that a further performance increase would be obtained if the algorithm were executed on clusters of GPUs with a high level of computational efficiency. The thesis is organized as follows: I first describe the mathematical formulation of Umbrella Sampling and of the WHAM algorithm, with their applications to the study of ionic channels and to Molecular Docking (Chapter 1); then I present the CUDA architectures used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
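For reference, the self-consistent WHAM equations that the thesis parallelizes can be written down in a few lines of NumPy (a hedged serial sketch with toy data, not the CUDA implementation).

```python
# Hedged sketch: serial WHAM iteration for umbrella sampling.
import numpy as np

def wham(counts, bias, beta=1.0, n_iter=2000, tol=1e-8):
    """counts[k, b]: histogram of window k in bin b; bias[k, b]: bias energy there."""
    K, B = counts.shape
    N = counts.sum(axis=1)                    # samples per window
    f = np.zeros(K)                           # free-energy shifts of the windows
    for _ in range(n_iter):
        denom = (N[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
        prob = counts.sum(axis=0) / denom     # unbiased probability per bin
        f_new = -np.log((prob[None, :] * np.exp(-beta * bias)).sum(axis=1)) / beta
        f_new -= f_new[0]                     # fix the arbitrary offset
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    free_energy = -np.log(np.clip(prob, 1e-300, None)) / beta
    return free_energy - free_energy.min(), f

# Toy data: two overlapping harmonic umbrella windows on a 1D coordinate.
x = np.linspace(-2, 2, 41)
bias = 0.5 * 10.0 * (x[None, :] - np.array([[-0.5], [0.5]])) ** 2
counts = np.exp(-x[None, :] ** 2 - bias)                  # idealized biased sampling
counts = (1000 * counts / counts.sum(axis=1, keepdims=True)).round()
F, f = wham(counts, bias)
print(np.round(F[10:31], 2))                              # well-sampled central bins
```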
Abstract:
In this report, a new automated optical test for the next generation of photonic integrated circuits (PICs) is presented through the design and assessment of a test bed. After a brief analysis of the critical problems of current optical tests, the main test features are defined: automation and flexibility, a relaxed alignment procedure, speed-up of the entire test, and data reliability. After studying various solutions, the test-bed components are defined as a lens array, a photo-detector array and a software controller. Each device is studied and calibrated, and the spatial resolution and the robustness against interference at the photo-detector array are characterized. The software is programmed to manage both the PIC input and the photo-detector array output, as well as the data analysis. The test is validated by analysing a state-of-the-art 16-port PIC: the waveguide locations, the current-versus-power characteristics and the time-spatial power distribution are measured, as well as the optical continuity of an entire path of the PIC. Complexity, alignment tolerance and measurement time are also discussed.
Abstract:
In this work, a coarse-grained (CG) simulation model for peptides in aqueous solution is developed. In a CG approach, the number of degrees of freedom of the system is reduced, so that larger systems can be studied on longer time scales. The interaction potentials of the CG model are constructed so as to reproduce the peptide conformations of a higher-resolution (atomistic) model. This work investigates the influence of different bonded interaction potentials in the CG simulation, in particular with respect to how well the conformational equilibrium of the atomistic simulation can be reproduced. By construction, the CG procedure loses microscopic structural details of the peptide, for example correlations between degrees of freedom along the peptide chain. The dissertation shows that these "lost" properties can be restored by a backmapping procedure in which the atomistic degrees of freedom are reinserted into the CG structures, as long as the conformations of the CG model agree sufficiently well with the atomistic level. The correlations mentioned above play a major role in the formation of secondary structure and are therefore of crucial importance for a realistic ensemble of peptide conformations. It is shown that a good agreement between CG and atomistic chain conformations requires special bonded interactions, such as 1-5 bond and 1,3,5 angle potentials. The intramolecular parameters (i.e. bonds, angles, torsions) parametrized for short oligopeptides are transferable to longer peptide sequences. However, these bonded interactions can only be combined with the non-bonded interaction potentials used during the parametrization; they cannot, for example, simply be combined with a different water model. Since the energy landscape in CG simulations is smoother than in the atomistic model, the dynamics are accelerated. This speed-up differs between dynamical processes, for example between different types of motion (rotation and translation). This is an important aspect when studying the kinetics of structure formation processes, for example peptide aggregation.
Abstract:
The aim of this work is to present various aspects of the numerical simulation of particle and radiation transport for industrial and environmental protection applications, to enable the analysis of complex physical processes in a fast, reliable and efficient way. In the first part we deal with the speed-up of the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence properties of the source iteration scheme of the Method of Characteristics, applied to heterogeneous structured geometries, have been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing a considerable reduction in iterations and CPU time. The second part is devoted to the study of the temperature-dependent elastic scattering of neutrons off heavy isotopes near the thermal region. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the centre-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers as a function of the age of the snow or ice layer, its thickness, the presence or absence of other underlying layers, and the amount of dust included in the snow, creating a framework able to decipher the spectral signals collected by orbiting detectors.
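As an example of the direct quadrature mentioned for the Legendre moments of the transfer kernel (a hedged sketch with one common normalization convention and an illustrative kernel, not the thesis code):

```python
# Hedged sketch: Legendre moments of a scattering kernel by Gauss-Legendre quadrature.
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

def legendre_moments(sigma, l_max, n_nodes=64):
    """Return sigma_l = (2l+1)/2 * integral_{-1}^{1} sigma(mu) P_l(mu) dmu."""
    mu, w = leggauss(n_nodes)
    vals = sigma(mu)
    return np.array([(2 * l + 1) / 2.0 * np.sum(w * vals * eval_legendre(l, mu))
                     for l in range(l_max + 1)])

# Mildly forward-peaked kernel built from the first three Legendre polynomials.
sigma = lambda mu: 1.0 + 0.6 * mu + 0.2 * (3 * mu**2 - 1) / 2
print(np.round(legendre_moments(sigma, 4), 6))   # expect ~[1, 0.6, 0.2, 0, 0]
```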
Abstract:
In the Western industrialized countries, breast cancer is the most common malignant tumour in women. Its worldwide share of all cancers in women amounts to about 21%. By now, one in nine women is at risk of developing breast cancer during her lifetime. The age-standardized mortality rate is currently just under 27%.

Breast cancer has a relatively low growth rate. A diagnostic procedure capable of detecting and removing all breast carcinomas below 10 mm in diameter would practically eliminate death from breast cancer, since the 20-year survival rate for initial carcinomas of 5 to 10 mm in size is very high, above 95%.

Contrast-enhanced MRI is a relatively young examination method that is sensitive enough to detect carcinomas from a size of 3 mm in diameter. The diagnostic methodology, however, is complex, error-prone, and requires a long training period and therefore considerable experience on the part of the radiologist.

Computer-aided diagnosis software can increase the quality of such a complex diagnosis, or at least speed up the process. The aim of this work is the development of fully automatic diagnosis software that can be used as a second-opinion system. To my knowledge, no such complete software exists to date.

The software executes a chain of different image processing steps modelled on the radiologist's workflow and produces an independent diagnosis for each detected lesion: first, a 3D image registration eliminates motion artefacts as a preprocessing step, in order to improve the image quality for the subsequent processing steps. Each contrast-enhancing object is detected by a rule-based segmentation with adaptive thresholds. By computing kinetic and morphological features, the contrast-uptake behaviour as well as the shape, margin and texture properties are described for each object. Finally, based on the resulting feature vector, two trained neural networks classify each object as an additional finding or as a benign or malignant lesion.

The performance of the software was tested on image data from 101 female patients containing 141 histologically confirmed lesions. The prediction of the benign or malignant status of these lesions yielded a sensitivity of 88% at a specificity of 72%. These values are similar to the predictions of expert radiologists reported in the literature. The predictions contained on average 2.5 additional malignant findings per patient, which turned out to be falsely classified artefacts.
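For illustration, two of the kinetic features that such software typically derives from a lesion's contrast-enhancement curve can be computed as follows (a hedged sketch; the exact feature definitions and thresholds used in the thesis are not reproduced here).

```python
# Hedged sketch: simple kinetic features from a dynamic contrast-enhanced signal curve.
import numpy as np

def kinetic_features(signal, t_pre=0, t_early=1, t_late=-1):
    """Relative initial enhancement (wash-in) and late washout of a signal-time curve."""
    s0, s_early, s_late = signal[t_pre], signal[t_early], signal[t_late]
    initial_enhancement = (s_early - s0) / s0     # uptake relative to baseline
    washout = (s_late - s_early) / s_early        # negative value indicates washout
    return initial_enhancement, washout

# Illustrative curve: strong uptake followed by washout (a suspicious pattern).
curve = np.array([100.0, 210.0, 195.0, 180.0, 170.0])
print(kinetic_features(curve))                    # ~(1.10, -0.19)
```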
Abstract:
The columnar growth habit of apple is interesting from an economic point of view, as the pillar-like trees require little space and labor. Genetic engineering could be used to speed up breeding for columnar trees with high fruit quality and disease resistance. To this end, this study investigated the molecular causes of this interesting phenotype. The original bud sport mutation that led to the columnar growth habit was found to be a novel nested insertion of a Gypsy-44 LTR retrotransposon on chromosome 10 at 18.79 Mb. This insertion causes tissue-specific differential expression of nearby downstream genes, particularly of a gene encoding a 2OG-Fe(II) oxygenase of unknown function (dmr6-like) that is strongly upregulated in developing aerial tissues of columnar trees. The tissue-specificity of the differential expression suggests the involvement of cis-regulatory regions and/or tissue-specific epigenetic markers whose influence on gene expression is altered by the retrotransposon insertion. This eventually leads to changes in genes associated with stress and defense reactions, cell wall and cell membrane metabolism, as well as phytohormone biosynthesis and signaling, which act together to cause the typical phenotypic characteristics of columnar trees, such as short internodes and the absence of long lateral branches. In the future, transformation experiments introducing Gypsy-44 into non-columnar varieties or excising Gypsy-44 from columnar varieties would provide proof of our hypotheses. However, since site-specific transformation of a nested retrotransposon is a (too) ambitious objective, silencing of the Gypsy-44 transcripts or of the nearby genes would also provide helpful clues.
Abstract:
In this thesis we present techniques that can be used to speed up the calculation of perturbative matrix elements for observables with many legs ($n = 3, 4, 5, 6, 7, \ldots$). We investigate several ways to achieve this, including the use of Monte Carlo methods, the leading-color approximation, numerically less precise but faster operations, and SSE vectorization. An important idea is the use of "random polarizations", for which we derive subtraction terms for the real corrections in next-to-leading-order calculations. We demonstrate the effectiveness of all these methods in the context of electron-positron scattering to $n$ jets, with $n$ ranging from two to seven.
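As a hedged illustration of the random-polarization idea (the conventions used in the thesis may differ), a massless vector-boson polarization can be written as a phase-dependent combination of the two helicity eigenstates,

$$
\varepsilon^\mu(\phi) \;=\; e^{i\phi}\,\varepsilon^\mu_{+} \;+\; e^{-i\phi}\,\varepsilon^\mu_{-},
$$

so that averaging the squared amplitude over the uniformly random phase reproduces the helicity sum,

$$
\frac{1}{2\pi}\int_0^{2\pi}\! d\phi\,\bigl|\mathcal{M}(\varepsilon(\phi))\bigr|^2
\;=\; \bigl|\mathcal{M}(\varepsilon_+)\bigr|^2 + \bigl|\mathcal{M}(\varepsilon_-)\bigr|^2 ,
$$

because the interference term proportional to $e^{\pm 2i\phi}$ integrates to zero. A Monte Carlo program can therefore sample one random polarization per external leg and per event instead of summing explicitly over all helicity configurations.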
Abstract:
Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Due to discrepancies between model and reality, the potential is not unique, so that the stability of such a method and its convergence to a meaningful solution are issues. In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause the non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems. From an analysis of the Iterative Boltzmann Inversion, we elaborate a meaningful approximation of the structure and use it to derive a modification of the Levenberg-Marquardt method. We use the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent and fast. Further, the singular value analysis of the structure and its approximation makes it possible to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
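For reference, the Iterative Boltzmann Inversion mentioned above updates the pair potential from the mismatch between the current and the target radial distribution functions; in its standard form (a sketch, conventions may differ slightly from those used in the thesis),

$$
V_{n+1}(r) \;=\; V_n(r) \;+\; k_B T\,\ln\!\frac{g_n(r)}{g_{\text{target}}(r)},
\qquad
V_0(r) \;=\; -k_B T\,\ln g_{\text{target}}(r),
$$

and the iteration stops when the simulated $g_n(r)$ matches the target structure within the desired tolerance.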