971 results for Parallel design multicenter


Relevance:

30.00%

Publisher:

Abstract:

In the highly competitive world of modern finance, new derivatives are continually required to take advantage of changes in financial markets, and to hedge businesses against new risks. The research described in this paper aims to accelerate the development and pricing of new derivatives in two different ways. Firstly, new derivatives can be specified mathematically within a general framework, enabling new mathematical formulae to be specified rather than just new parameter settings. This Generic Pricing Engine (GPE) is expressively powerful enough to specify a wide range of standard pricing engines. Secondly, the associated price simulation using the Monte Carlo method is accelerated using GPU or multicore hardware. The parallel implementation (in OpenCL) is automatically derived from the mathematical description of the derivative. As a test, for a Basket Option Pricing Engine (BOPE) generated using the GPE, on the largest problem size, an NVidia GPU runs the generated pricing engine at 45 times the speed of a sequential, specific hand-coded implementation of the same BOPE. Thus a user can more rapidly devise, simulate and experiment with new derivatives without actual programming.
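To make the Monte Carlo pricing step concrete, the following is a minimal sequential sketch of a European basket call priced under correlated geometric Brownian motion. All parameter values are illustrative assumptions; the GPE described above generates such kernels automatically (in OpenCL) from a mathematical specification rather than relying on hand-written code like this.

```python
import numpy as np

def basket_call_price(s0, sigma, corr, weights, strike, r, t,
                      n_paths=100_000, seed=0):
    """Monte Carlo price of a European basket call (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(corr)                 # correlate Gaussian draws
    z = rng.standard_normal((n_paths, len(s0))) @ chol.T
    drift = (r - 0.5 * sigma**2) * t
    st = s0 * np.exp(drift + sigma * np.sqrt(t) * z)  # terminal asset prices
    basket = st @ weights                           # weighted basket value
    payoff = np.maximum(basket - strike, 0.0)       # call payoff per path
    return np.exp(-r * t) * payoff.mean()           # discounted expectation

price = basket_call_price(
    s0=np.array([100.0, 100.0]),
    sigma=np.array([0.2, 0.3]),
    corr=np.array([[1.0, 0.5], [0.5, 1.0]]),
    weights=np.array([0.5, 0.5]),
    strike=100.0, r=0.01, t=1.0)
```

Each Monte Carlo path is independent, which is exactly why this workload maps so well onto GPU or multicore hardware.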


Thesis (Ph.D.)--University of Washington, 2016-08


Background and Objectives: Schizophrenia is a severe chronic disease. Endpoint variables lack objectivity and the diagnostic criteria have evolved over time. In order to guide the development of new drugs, the European Medicines Agency (EMA) issued a guideline on the clinical investigation of medicinal products for the treatment of schizophrenia. Methods: The authors reviewed and discussed the efficacy-trial part of the Guideline. Results: The Guideline divides clinical efficacy trials into short-term and long-term trials. The short-term three-arm trial is recommended to replace the short-term two-arm active-controlled non-inferiority trial because the latter has sensitivity issues; the Guideline ultimately makes that three-arm trial a superiority trial. The Guideline discusses four types of long-term trial designs. The randomized withdrawal trial design has some disadvantages. The long-term two-arm active-controlled non-inferiority trial is not recommended due to the same sensitivity issue. Extension of the short-term trial is only suitable as an extension of the short-term two-arm active-controlled superiority trial. The Guideline suggests that a hybrid design, in which a randomized withdrawal trial is incorporated into a long-term parallel trial, might be optimal. However, such a design has some disadvantages and might be too complex to carry out. The authors instead suggest a three-group long-term trial design, which could provide a comparison between the test drug and an active comparator along with a comparison between the test drug and placebo. This alternative could arguably be much easier to carry out than the hybrid design. Conclusions: The three-group long-term design merits further discussion and evaluation.
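The logic of the proposed three-group design (test drug vs. placebo and test drug vs. active comparator in one trial) can be illustrated with a toy simulation. The arm sizes, effect sizes and standard deviation below are invented for illustration only and are not values from the Guideline or the article; negative change scores denote improvement.

```python
import numpy as np

def simulate_three_group_trial(n_per_arm=400, mu_placebo=0.0,
                               mu_active=-5.0, mu_test=-6.0,
                               sd=20.0, seed=1):
    """Simulate symptom-change scores for a three-group trial (toy model)."""
    rng = np.random.default_rng(seed)
    arms = {"placebo": rng.normal(mu_placebo, sd, n_per_arm),
            "active":  rng.normal(mu_active, sd, n_per_arm),
            "test":    rng.normal(mu_test, sd, n_per_arm)}

    def z(a, b):
        # Two-sample z statistic for the difference in mean change scores.
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        return (a.mean() - b.mean()) / se

    # The single design yields both comparisons the authors highlight.
    return {"test_vs_placebo_z": z(arms["test"], arms["placebo"]),
            "test_vs_active_z":  z(arms["test"], arms["active"])}

result = simulate_three_group_trial()
```

Running many such simulations under varying assumptions is one way to compare the operating characteristics of the three-group design against the hybrid design.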


Vertebrate genomes are organised into a variety of nuclear environments and chromatin states that have profound effects on the regulation of gene transcription. This variation presents a major challenge to the expression of transgenes for experimental research, genetic therapies and the production of biopharmaceuticals. The majority of transgenes succumb to transcriptional silencing by their chromosomal environment when they are randomly integrated into the genome, a phenomenon known as chromosomal position effect (CPE). It is not always feasible to target transgene integration to transcriptionally permissive “safe harbour” loci that favour transgene expression, so there remains an unmet need to identify gene regulatory elements that can be added to transgenes to protect them against CPE. Dominant regulatory elements (DREs) with chromatin barrier (or boundary) activity have been shown to protect transgenes from CPE. The HS4 element from the chicken beta-globin locus and the A2UCOE element from a human housekeeping gene locus have been shown to function as DRE barriers in a wide variety of cell types and species. Despite rapid advances in the profiling of transcription factor binding, chromatin states and chromosomal looping interactions, progress towards functionally validating the many candidate barrier elements in vertebrates has been very slow, largely due to the lack of a tractable and efficient assay for chromatin barrier activity. In this study, I have developed the RGBarrier assay system to test the chromatin barrier activity of candidate DREs at pre-defined isogenic loci in human cells. The RGBarrier assay consists of a Flp-based RMCE reaction for the integration of an expression construct, carrying candidate DREs, into a pre-characterised chromosomal location. The RGBarrier system tracks red, green and blue fluorescent proteins by flow cytometry to monitor on-target versus off-target integration and transgene expression.
Analysing reporter (GFP) expression over several weeks gives a measure of each candidate element's ability to protect against chromosomal silencing. The assay can be scaled up to test tens of new putative barrier elements in the same chromosomal context in parallel. The defined chromosomal contexts of the RGBarrier assays will allow detailed mechanistic studies of chromosomal silencing and of DRE barrier element action. Understanding these mechanisms will be of paramount importance for designing specific solutions to overcome chromosomal silencing in specific transgenic applications.
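The flow-cytometry readout described above amounts to gating cells on fluorescence channels and computing the fraction of on-target integrants that still express the reporter. The sketch below assumes, purely for illustration, that on-target cells are red-positive and blue-negative and that GFP marks active reporter expression; the thresholds and channel assignments are hypothetical, not those of the actual RGBarrier protocol.

```python
import numpy as np

def barrier_readout(rfp, gfp, bfp, rfp_min=1000.0,
                    gfp_min=500.0, bfp_max=200.0):
    """Fraction of gated on-target cells still expressing the GFP reporter.

    Hypothetical gating: on-target = RFP-positive and BFP-negative;
    a higher returned fraction means stronger barrier (anti-silencing)
    activity of the candidate DRE.
    """
    rfp, gfp, bfp = map(np.asarray, (rfp, gfp, bfp))
    on_target = (rfp > rfp_min) & (bfp < bfp_max)   # gate the integrants
    return float(np.mean(gfp[on_target] > gfp_min))  # GFP+ among on-target

# Synthetic intensities: three on-target cells (two expressing GFP),
# one off-target cell that is excluded by the gate.
frac = barrier_readout(rfp=[5000, 4800, 5100, 50],
                       gfp=[2000, 30, 1800, 2500],
                       bfp=[10, 20, 15, 900])
```

Tracking this fraction week by week for each candidate element, in the same chromosomal context, gives the parallel silencing time-courses the assay is designed to produce.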


Virtual Screening (VS) methods can considerably aid clinical research by predicting how ligands interact with drug targets. Most VS methods assume a unique binding site on the target, but it has been demonstrated that diverse ligands interact with unrelated regions of the target, a fact that many VS methods fail to take into account. This problem is circumvented by a novel VS methodology named BINDSURF, which scans the whole protein surface to find new hotspots where ligands might potentially interact, and which is implemented on massively parallel Graphics Processing Units (GPUs), allowing fast processing of large ligand databases. BINDSURF can thus be used in drug discovery, drug design and drug repurposing, and therefore helps considerably in clinical research. However, the accuracy of most VS methods is constrained by limitations in the scoring function that describes biomolecular interactions, and even nowadays these uncertainties are not completely understood. To address this problem, we propose a novel approach in which neural networks are trained on databases of known active compounds (drugs) and inactive compounds, and are later used to improve VS predictions.
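The proposed rescoring step can be sketched as a classifier trained on descriptors of known actives and inactives. The minimal single-layer "network" (logistic regression trained by gradient descent) and the synthetic descriptor data below are illustrative stand-ins; the actual descriptors and network architecture used with BINDSURF are not specified here.

```python
import numpy as np

def train_rescorer(X, y, lr=0.5, epochs=300):
    """Train a single-layer classifier: actives (y=1) vs inactives (y=0)."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(scale=0.01, size=X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted activity prob.
        g = p - y                               # cross-entropy gradient
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Synthetic "descriptors": actives cluster at +1, inactives at -1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.0, 0.5, (50, 4)),
               rng.normal(-1.0, 0.5, (50, 4))])
y = np.concatenate([np.ones(50), np.zeros(50)])
w, b = train_rescorer(X, y)
acc = np.mean(((X @ w + b) > 0) == (y == 1))   # training accuracy
```

In a real pipeline the trained model would re-rank BINDSURF's docking poses, complementing the physics-based scoring function with knowledge distilled from activity databases.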


Integrated circuit scaling has enabled a huge growth in processing capability, which necessitates a corresponding increase in inter-chip communication bandwidth. As bandwidth requirements for chip-to-chip interconnection scale, the deficiencies of electrical channels become more apparent. Optical links present a viable alternative due to their low frequency-dependent loss and higher bandwidth density in the form of wavelength-division multiplexing. As integrated photonics and bonding technologies mature, commercialization of hybrid-integrated optical links is becoming a reality. Increasing silicon integration leads to better performance in optical links but necessitates a corresponding co-design strategy in both electronics and photonics. In this light, holistic design of high-speed optical links, with an in-depth understanding of photonics and state-of-the-art electronics, brings their performance to unprecedented levels. This thesis presents developments in high-speed optical links achieved by co-designing and co-integrating the primary elements of an optical link: receiver, transmitter, and clocking.

In the first part of this thesis, a 3D-integrated CMOS/silicon-photonic receiver is presented. The electronic chip features a novel design that employs a low-bandwidth TIA front-end, double sampling, and equalization through dynamic offset modulation. Measured results show -14.9dBm sensitivity and 170fJ/b energy efficiency at 25Gb/s. The same receiver front-end is also used to implement a source-synchronous 4-channel WDM-based parallel optical receiver. Quadrature injection-locked-oscillator (ILO) based clocking is employed for synchronization, together with a novel frequency-tracking method that exploits the dynamics of injection locking in a quadrature ring oscillator to increase the effective locking range. An adaptive body-biasing circuit is designed to keep the per-bit energy consumption constant across a wide range of data rates. The prototype measurements indicate a record-low power consumption of 153fJ/b at 32Gb/s. The receiver sensitivity is measured to be -8.8dBm at 32Gb/s.
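The energy-per-bit figures quoted above translate directly into power: power equals energy per bit times bit rate, and the units work out so that 1 fJ/b at 1 Gb/s is 1 µW. A one-line sanity check:

```python
def link_power_mw(energy_fj_per_bit, data_rate_gbps):
    """Power implied by an energy-per-bit figure.

    P = (energy per bit) x (bit rate); 1 fJ/b x 1 Gb/s = 1 uW,
    so divide by 1000 to express the result in mW.
    """
    return energy_fj_per_bit * data_rate_gbps / 1000.0

# The 153 fJ/b receiver running at 32 Gb/s:
power = link_power_mw(153, 32)  # about 4.9 mW
```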

Next, on the optical transmitter side, three new techniques are presented. The first is a differential ring modulator that breaks the optical bandwidth/quality-factor trade-off known to limit the speed of high-Q ring modulators. This structure maintains a constant energy in the ring to avoid pattern-dependent power droop. As a first proof of concept, a prototype has been fabricated and measured up to 10Gb/s. The second technique is thermal stabilization of micro-ring resonator modulators through direct measurement of temperature using a monolithic PTAT temperature sensor. The measured temperature is used in a feedback loop to adjust the thermal tuner of the ring. A prototype was fabricated, and the closed-loop feedback system is demonstrated to operate at 20Gb/s in the presence of temperature fluctuations. The third technique is a switched-capacitor-based pre-emphasis technique designed to extend the inherently low bandwidth of carrier-injection micro-ring modulators. A measured prototype of the optical transmitter achieves 342fJ/bit energy efficiency at 10Gb/s, and the wavelength-stabilization circuit based on the monolithic PTAT sensor consumes 0.29mW.

Lastly, a first-order frequency synthesizer suitable for high-speed on-chip clock generation is discussed. The proposed design features an architecture combining an LC quadrature VCO, two sample-and-holds, a phase interpolator (PI), digital coarse tuning, and rotational frequency detection for fine tuning. In addition to an electrical reference clock, the prototype chip is capable of receiving a low-jitter optical reference clock generated by a high-repetition-rate mode-locked laser. The output clock at 8GHz has an integrated RMS jitter of 490fs, peak-to-peak periodic jitter of 2.06ps, and total RMS jitter of 680fs. The reference spurs are measured to be 64.3dB below the carrier. At 8GHz the system consumes 2.49mW from a 1V supply.
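To put the jitter numbers in perspective, RMS jitter is often expressed as a fraction of the clock period (unit interval). At 8 GHz the period is 125 ps, so 490 fs of integrated RMS jitter is well under 1% of a unit interval:

```python
def jitter_fraction_of_period(rms_jitter_s, clock_hz):
    """Express RMS jitter as a fraction of one clock period (unit interval)."""
    return rms_jitter_s * clock_hz

# 490 fs integrated RMS jitter on the 8 GHz output clock:
ui_fraction = jitter_fraction_of_period(490e-15, 8e9)  # about 0.0039 UI
```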


Hemophilic arthropathy limits the daily life activities of patients with hemophilia, presenting with clinical manifestations such as chronic pain, limited mobility, or muscular atrophy. Although physical therapy is considered essential for these patients, few clinical studies have demonstrated the efficacy and safety of the various physiotherapy techniques. Physical therapy may be useful for treating hemophilic arthropathy by applying safe and effective techniques. However, it is necessary to create treatment protocols to avoid the risk of bleeding in these patients. This article describes the musculoskeletal pathology of hemophilic arthropathy and the characteristics of fascial therapy. This systematic protocol for the treatment of knee and ankle arthropathy by fascial therapy in patients with hemophilia provides an analysis of the techniques that, depending on their purpose and methodology, can be used in these patients. Similarly, the protocol's applicability is analyzed and the steps to be followed in future research studies are described. Fascial therapy is a promising physiotherapy technique for treating fascial tissue and joint contractures in patients with hemophilic arthropathy. More research is needed to assess the efficacy and safety of this intervention in patients with hemophilia, particularly through randomized multicenter clinical trials.


Shearing is the process where sheet metal is mechanically cut between two tools. Various shearing technologies are commonly used in the sheet metal industry, for example in cut-to-length lines, slitting lines and end cropping. Shearing has speed and cost advantages over competing cutting methods like laser and plasma cutting, but involves large forces on the equipment and large strains in the sheet material. The constant development of sheet metals toward higher strength and formability leads to increased forces on the shearing equipment and tools. Shearing of new sheet materials implies that new suitable shearing parameters must be found. Investigating the shearing parameters through live tests in production is expensive, and separate experiments are time consuming and require specialized equipment. Studies involving a large number of parameters and coupled effects are therefore preferably performed by finite element based simulations, but accurate experimental data are still a prerequisite to validate such simulations, and such data are in short supply. In industrial shearing processes, measured forces are always larger than the actual forces acting on the sheet, due to friction losses. Shearing also generates a force that attempts to separate the two tools, which changes the shearing conditions by increasing the clearance between them. Tool clearance is also the most common shearing parameter to adjust, depending on material grade and sheet thickness, to moderate the required force and to control the final sheared-edge geometry. In this work, an experimental procedure that provides a stable tool clearance together with accurate measurements of tool forces and tool displacements was designed, built and evaluated. Important shearing parameters and demands on the experimental set-up were identified in a sensitivity analysis performed with finite element simulations under the assumption of plane strain.
With respect to tool clearance stability and accurate force measurements, a symmetric experiment with two simultaneous shears and internal balancing of the forces attempting to separate the tools was constructed. Steel sheets of different strength levels were sheared using the above-mentioned experimental set-up with various tool clearances, sheet clampings and rake angles. Results showed that tool penetration before fracture decreased with increased material strength. When one side of the sheet was left unclamped and free to move, the required shearing force decreased, but the force attempting to separate the two tools increased. Further, the maximum shearing force decreased and the rollover increased with increased tool clearance. Digital image correlation was applied to measure strains on the sheet surface. The obtained strain fields, together with a material model, were used to compute the stress state in the sheet. A comparison, up to crack initiation, of these experimental results with corresponding results from finite element simulations in three dimensions and under a plane-strain approximation showed that effective strains on the surface are representative also for the bulk material. A simple model was successfully applied to calculate the tool forces in shearing with angled tools from forces measured with parallel tools. These results suggest that, with respect to tool forces, a plane-strain approximation is valid also for angled tools, at least for small rake angles. In general terms, this study provides a stable symmetric experimental set-up with internal balancing of lateral forces, for accurate measurements of tool forces, tool displacements and sheet deformations, to study the effects of important shearing parameters. The results give further insight into the strain and stress conditions at crack initiation during shearing, and can also be used to validate models of the shearing process.
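The idea of deriving angled-tool forces from parallel-tool measurements can be sketched numerically. The model below assumes plane strain, a local penetration that varies linearly along the sheet width due to the rake angle, and a force contribution per unit width taken from the parallel-tool force curve; the triangular force curve and all numerical values are invented for illustration and are not the thesis's exact formulation or data.

```python
import numpy as np

def angled_tool_force(stroke, f_per_width, width, rake_angle,
                      u_final, n=2000):
    """Shearing force for angled tools from a parallel-tool force curve.

    f_per_width : parallel-tool force per unit width vs local penetration
                  (zero once the sheet is fully separated at u_final).
    Each width slice contributes at its own local penetration.
    """
    x = np.linspace(0.0, width, n)
    dx = width / (n - 1)
    pen = stroke - x * np.tan(rake_angle)      # local penetration along width
    engaged = (pen > 0.0) & (pen < u_final)    # slices still carrying load
    f = np.where(engaged, f_per_width(np.clip(pen, 0.0, u_final)), 0.0)
    return float(np.sum(f) * dx)               # integrate across the width

# Illustrative triangular parallel-tool curve peaking mid-penetration.
f_max, u_final = 500.0, 1.0                    # N/mm and mm (assumed)
tri = lambda u: f_max * np.minimum(u, u_final - u) / (u_final / 2)

parallel_peak = f_max * 100.0                  # parallel tools, 100 mm width
angled_peak = max(angled_tool_force(s, tri, 100.0, np.radians(2.0), u_final)
                  for s in np.linspace(0.0, 5.0, 200))
```

Even a small 2-degree rake angle spreads the cut along the width, so the peak angled-tool force is a small fraction of the parallel-tool peak, which is precisely why angled tools are used to moderate equipment loads.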


Integrins are α/β-heterodimeric transmembrane adhesion receptors that mediate cell-cell and cell-ECM interactions. Integrins are bidirectional signalling receptors that respond to external signals (“outside-in” signalling) and, in parallel, transduce internal signals to the matrix (“inside-out” signalling) to regulate vital cellular functions including migration, survival, growth and differentiation. Dysregulation of these tightly regulated processes often results in uncontrolled integrin activation and abnormal tissue expression, which are responsible for many diseases. Because of their important roles in physiological and pathological events, integrins represent a validated target for therapeutic and diagnostic purposes. The present Thesis focused on the development of peptidic ligands for the α4β1 and αvβ3 integrin subtypes, involved respectively in inflammatory responses (leukocyte recruitment and extravasation) and cancer progression (angiogenesis, tumor growth, metastasis). Following the peptidomimetic strategy, we designed and synthesized a small library of linear and cyclic hybrid α/β-peptidomimetics based on phenylureido-LDV scaffolds for the treatment of chronic inflammatory autoimmune diseases. In order to implement a fast and non-invasive diagnostic method for monitoring the course of inflammatory processes, a flat glass surface bearing dye-loaded Zeolite L-crystal nanoparticles was coated with bioactive α4β1-peptidomimetics to detect specific integrin-expressing cells as biomarkers of inflammatory diseases. Targeted drug delivery has been considered a promising alternative to overcome the pharmacokinetic limitations of conventional anticancer drugs. Thus, a novel Small-Molecule Drug Conjugate was synthesized by connecting the highly cytotoxic Cryptophycin to the tumor-targeting RGDfK peptide through a protease-cleavable linker.
Finally, with a view to making peptide synthesis more sustainable and greener, we developed an alternative method for peptide-bond formation employing solvent-free mechanochemistry and ultra-mild minimal-solvent grinding conditions in common, inexpensive laboratory equipment. To this purpose, standard amino acids, coupling agents and green organic solvents were used in the presence of nanocrystalline hydroxyapatite as a reusable, biocompatible inorganic basic catalyst.


Neuroinflammation constitutes a major player in the etiopathology of neurodegenerative diseases (NDDs) by orchestrating several neurotoxic pathways which in concert lead to neurodegeneration. A positive feedback loop occurs between inflammation, microglia activation and misfolding processes which, alongside excitotoxicity and oxidative events, represent crucial features of this intricate scenario. The multi-layered nature of NDDs requires a deeper investigation of how these vicious cycles work, which could in turn help the search for effective treatments. Electrophiles are critically involved in the modulation of a variety of neuroprotective responses. Thus, we envisioned their peculiar ability to switch biological activities on and off as a powerful tool for investigating the neurotoxic scenario driven by inflammation in NDDs. In particular, in this thesis project we wanted to dissect at the molecular level the functional role of (pro)electrophilic moieties of previously synthesized thioesters of variously substituted trans-cinnamic acids, in order to identify crucial features which could interfere with amyloid aggregation as well as modulate Nrf2 and/or NF-κB activation. To this aim, we first synthesized new compounds to identify bioactive cores which could specifically modulate the intended target. Then, we systematically modified their structure to reach additional pathogenic pathways which could in tandem contribute to the inflammatory process. In particular, following the investigation of the mechanistic underpinnings of the catechol feature in amyloid binding through the synthesis of new dihydroxyl derivatives, we incorporated the identified antiaggregating nucleus into constrained frames which could also counteract neuroinflammation through the modulation of CB2Rs.
In parallel, Nrf2 and/or NF-κB anti-inflammatory structural requirements were combined with the neuroprotective cores of pioglitazone, an antidiabetic drug endowed with MAO-B inhibitory properties, and memantine, which notably counteracts excitotoxicity. Acting as Swiss army knives, the new set of molecules emerges as a promising tool to deepen our insight into the complex scenario regulating NDDs.


The replacement of internal combustion engine vehicles with EVs is known as electrification. The push electrification has experienced in the last decade is linked to the still ongoing evolution of power electronics technology for charging systems. This is why an evolution in testing strategies and testing equipment is crucial too. The project this dissertation is based on concerns the investigation of a new EV simulator design that optimizes the structure of the testing equipment used by the company that commissioned this work. The project requirements can be summarized in two points: reduction of space occupation and implementation of parallel charging. Some components were completely redesigned, and others were substituted with equivalent ones that could perform the same tasks. In this way it was possible to reduce the space occupation of the simulator and to increase the efficiency of the testing device. Moreover, the possibility of combining different charging simulations was investigated by launching two testing procedures in parallel on a single machine, properly equipped to support the two charging protocols used. On the basis of the results achieved in the body of this dissertation, a new design for the EV simulator was proposed that reduces space occupation and improves space efficiency. The testing device is thus considerably more compact, yielding gains in safety and productivity along with a 25% cost reduction. Furthermore, parallel charging was implemented in the proposed design, since the conducted tests clearly showed the feasibility of parallel charging sessions. The results presented in this work can thus be used to build the first prototype of the new EV simulator.


High energy efficiency and high performance are the key requirements for Internet of Things (IoT) end-nodes. Exploiting clusters of multiple programmable processors has recently emerged as a suitable solution to address this challenge. However, one of the main bottlenecks for multi-core architectures is the instruction cache. While private caches suffer from data replication and waste area, fully shared caches lack scalability and form a bottleneck for the operating frequency. Hence, we propose a hybrid solution in which a larger shared cache (L1.5) is shared by multiple cores connected through a low-latency interconnect to small private caches (L1). This is still limited by capacity misses when the L1 is small, so we propose a sequential prefetch from L1 to L1.5 to improve performance with little area overhead. Moreover, to cut the critical path for better timing, we optimized the core instruction-fetch stage with non-blocking transfers, adopting a 4 x 32-bit ring-buffer FIFO and adding a pipeline stage for conditional branches. We present a detailed comparison of the performance and energy efficiency of different instruction-cache architectures recently proposed for Parallel Ultra-Low-Power clusters. On average, when executing a set of real-life IoT applications, our two-level cache improves performance by up to 20% while losing 7% energy efficiency with respect to the private cache. Compared to a shared-cache system, it improves performance by up to 17% while keeping the same energy efficiency. Finally, up to 20% timing (maximum frequency) improvement and software control enable the two-level instruction cache with prefetch to adapt to various battery-powered use cases, balancing high performance and energy efficiency.
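The two-level lookup described above (check the private L1 first, then the shared L1.5) can be sketched with a toy trace-driven model. The fully-associative LRU policy, the capacities and the line size below are illustrative choices for the sketch, not the paper's actual configuration, and the model covers one core without prefetch.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal fully-associative LRU cache; capacity counted in lines."""
    def __init__(self, n_lines):
        self.n_lines, self.lines = n_lines, OrderedDict()

    def access(self, line):
        hit = line in self.lines
        if hit:
            self.lines.move_to_end(line)     # refresh LRU position
        else:
            self.lines[line] = True          # fill on miss
            if len(self.lines) > self.n_lines:
                self.lines.popitem(last=False)  # evict least recent
        return hit

def run_trace(trace, l1_lines=16, l15_lines=256, line_bytes=16):
    """Toy model of a private L1 backed by a shared L1.5."""
    l1, l15 = LRUCache(l1_lines), LRUCache(l15_lines)
    l1_hits = l15_hits = 0
    for addr in trace:
        line = addr // line_bytes
        if l1.access(line):
            l1_hits += 1
        elif l15.access(line):
            l15_hits += 1                    # served from the shared level
    return l1_hits, l15_hits, len(trace)

# A tight instruction loop that fits in L1: after the first pass,
# every fetch hits in the private cache.
loop = list(range(0, 16 * 16, 16)) * 10
l1_hits, l15_hits, total = run_trace(loop)
```

Feeding such a model with larger-than-L1 working sets shows the capacity misses that motivate both the shared L1.5 and the sequential prefetch.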


Continuum parallel robots (CPRs) are manipulators employing multiple flexible beams arranged in parallel and connected to a rigid end-effector. CPRs promise higher payload and accuracy than serial continuum robots while keeping great flexibility. Since the risk of injury during accidental contact between a human and a CPR is reduced, CPRs may be used in large-scale collaborative tasks or assisted robotic surgery. Various CPR designs exist, but prototype conception is rarely based on performance considerations, and CPR realization is mainly based on intuition or on rigid-link parallel-manipulator architectures. This thesis focuses on the performance analysis of CPRs and on the tools needed for such evaluation, such as workspace-computation algorithms. In particular, workspace-computation strategies for CPRs are essential for performance assessment, since the CPR workspace may be used as a performance index or may serve in optimal-design tools. Two new workspace-computation algorithms are proposed in this manuscript: the former focuses on computing the workspace volume and certifying its numerical results, while the latter aims at computing the workspace boundary only. Due to the elastic nature of CPRs, a key performance indicator for these robots is the stability of their equilibrium configurations. This thesis presents the experimental validation of the equilibrium-stability assessment on a real prototype, demonstrating the limitations of some commonly used assumptions. Additionally, a performance index measuring the distance to instability is originally proposed in this manuscript. Differently from the majority of existing approaches, the clear advantage of the proposed index is its sound physical meaning; accordingly, the index can be used for a more straightforward performance quantification and to derive robot specifications.
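As a baseline for what a workspace-volume computation involves, the following is a generic Monte Carlo sketch: sample candidate end-effector positions in a bounding box and count those accepted by a reachability test. The `reachable` predicate here is a stand-in (a unit ball, not a robot model), and unlike the thesis's first algorithm this estimate carries no certification of its numerical result.

```python
import numpy as np

def workspace_volume_mc(reachable, lo, hi, n=200_000, seed=0):
    """Monte Carlo estimate of workspace volume.

    reachable : vectorized predicate accepting an (n, d) array of
                candidate positions (would wrap the kinetostatic model
                and equilibrium-stability check of a real CPR).
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pts = rng.uniform(lo, hi, size=(n, len(lo)))
    frac = np.mean(reachable(pts))           # accepted fraction of samples
    return frac * np.prod(hi - lo)           # scale by bounding-box volume

# Stand-in "robot": the reachable set is the unit ball (volume 4*pi/3).
unit_ball = lambda pts: np.linalg.norm(pts, axis=1) <= 1.0
vol = workspace_volume_mc(unit_ball, [-1, -1, -1], [1, 1, 1])
```

For a real CPR the predicate is expensive (it requires solving the elastic equilibrium and checking its stability), which is one motivation for the boundary-only algorithm mentioned above.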


Photoplethysmography (PPG) sensors allow for noninvasive and comfortable heart-rate (HR) monitoring, suitable for compact wearable devices. However, PPG signals collected from such devices often suffer from corruption caused by motion artifacts. This is typically addressed by combining the PPG signal with acceleration measurements from an inertial sensor. Recently, different energy-efficient deep-learning approaches for heart-rate estimation have been proposed. To test these new solutions, in this work we developed a highly wearable platform (42 mm x 48 mm x 1.2 mm) for PPG signal acquisition and processing, based on GAP9, a parallel ultra-low-power system-on-chip featuring a nine-core RISC-V compute cluster with a neural-network accelerator and a single-core RISC-V controller. The hardware platform also integrates a complete commercial optical biosensing module and an ARM Cortex-M4 microcontroller unit (MCU) with Bluetooth Low Energy connectivity. To demonstrate the capabilities of the system, a deep-learning-based approach for PPG-based HR estimation has been deployed. Thanks to the reduced power consumption of the digital computational platform, the total power budget is just 2.67 mW, providing up to 5 days of operation on a 105 mAh battery.
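For reference, the classical (non-learning) baseline for PPG-based HR estimation is spectral: take the dominant frequency of the detrended signal within the plausible heart-rate band. The sketch below illustrates that baseline on a clean synthetic signal; the platform described above instead deploys a deep-learning estimator precisely because this simple approach degrades under motion artifacts.

```python
import numpy as np

def estimate_hr_bpm(ppg, fs, lo_bpm=40.0, hi_bpm=200.0):
    """Spectral HR estimate: dominant in-band frequency of the PPG signal."""
    spectrum = np.abs(np.fft.rfft(ppg - np.mean(ppg)))   # detrend, then FFT
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]  # peak -> bpm

# Synthetic PPG: a 1.25 Hz pulse (75 bpm) sampled at 32 Hz for 8 s.
fs = 32.0
t = np.arange(256) / fs
hr = estimate_hr_bpm(np.sin(2 * np.pi * 1.25 * t), fs)
```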