14 results for Design Platform


Relevance: 40.00%

Abstract:

Microneedles (MNs) are a minimally invasive drug delivery platform, designed to enhance transdermal drug delivery by breaching the stratum corneum. For the first time, this study describes the simultaneous delivery of a combination of three drugs using a dissolving polymeric MN system. Aspirin, lisinopril dihydrate, and atorvastatin calcium trihydrate were used as exemplar cardiovascular drugs and formulated into MN arrays using two biocompatible polymers, poly(vinylpyrrolidone) and poly(methylvinylether/maleic acid). Following fabrication, dissolution, mechanical testing, and determination of drug recovery from the MN arrays, in vitro drug delivery studies were undertaken, followed by HPLC analysis. All three drugs were successfully delivered in vitro across neonatal porcine skin, with similar permeation profiles achieved from both polymer formulations. An average of 126.3 ± 18.1 μg of atorvastatin calcium trihydrate was delivered, notably lower than the 687.9 ± 101.3 μg of lisinopril and 3924 ± 1011 μg of aspirin, because of the hydrophobic nature of the atorvastatin molecule and its consequently poor dissolution from the array. Polymer deposition into the skin may be an issue with repeated application of such MN arrays; future work will therefore consider MN systems more appropriate for continuous use, alongside tailoring delivery to less hydrophilic compounds.

Relevance: 30.00%

Abstract:

Currently, there are no fast in vitro broad-spectrum screening bioassays for the detection of marine toxins. The aim of this study was to develop such an assay. In gene expression profiling experiments, 17 marker genes were provisionally selected that were differentially regulated in human intestinal Caco-2 cells upon exposure to the lipophilic shellfish poisons azaspiracid-1 (AZA1) or dinophysistoxin-1 (DTX1). These 17 genes, together with two control genes, formed the basis for the design of a tailored microarray platform for the detection of these marine toxins and potentially others. Five of the 17 selected marker genes on this dedicated DNA microarray gave clear signals, and the resulting fingerprints could be used to detect these toxins. CEACAM1, DDIT4, and TUBB3 were up-regulated by both AZA1 and DTX1, TRIB3 was up-regulated by AZA1 only, and OSR2 by DTX1 only. Analysis by singleplex qRT-PCR revealed up- and down-regulation by DTX1 of the selected RGS16 and NPPB marker genes, which was not captured by the newly developed dedicated array. qRT-PCR targeting the DDIT4, RGS16, and NPPB genes thus already produced a specific pattern for AZA1 and DTX1, indicating that for this specific case qRT-PCR might be a more suitable approach than a dedicated array.
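The qualitative fingerprints above can be read as a small decision rule. A minimal sketch (the matching rule and sample values are our own illustration, not the study's analysis pipeline) of classifying an exposure from marker-gene regulation directions:

```python
# Reference fingerprints from the abstract: +1 = up-regulated, 0 = unchanged.
FINGERPRINTS = {
    "AZA1": {"CEACAM1": +1, "DDIT4": +1, "TUBB3": +1, "TRIB3": +1, "OSR2": 0},
    "DTX1": {"CEACAM1": +1, "DDIT4": +1, "TUBB3": +1, "TRIB3": 0, "OSR2": +1},
}

def classify(sample: dict) -> str:
    """Return the toxin whose fingerprint best matches the observed pattern."""
    def score(toxin: str) -> int:
        ref = FINGERPRINTS[toxin]
        return sum(1 for gene, direction in ref.items()
                   if sample.get(gene, 0) == direction)
    return max(FINGERPRINTS, key=score)

# A hypothetical sample showing the AZA1-like pattern.
sample = {"CEACAM1": +1, "DDIT4": +1, "TUBB3": +1, "TRIB3": +1, "OSR2": 0}
print(classify(sample))  # AZA1
```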

Relevance: 30.00%

Abstract:

OpenPMU is an open platform for the development of phasor measurement unit (PMU) technology. A need has been identified for an open-source alternative to commercial PMU devices, tailored to the needs of the university researcher and enabling the development of new synchrophasor instruments from this foundation. OpenPMU achieves this through open-source hardware design specifications and software source code, allowing duplicates of the OpenPMU to be fabricated under open-source licenses. This paper presents the OpenPMU device based on the LabVIEW development environment. The device is performance tested according to the IEEE C37.118.1 standard. Compatibility with the IEEE C37.118.2 messaging format is achieved through middleware which is readily adaptable to other PMU projects or applications. Improvements have been made to the original design to increase its flexibility. A new modularized architecture for the OpenPMU is presented, using an open messaging format which the authors propose be adopted as a platform for PMU research.

Relevance: 30.00%

Abstract:

We identified nine small-molecule hit compounds against heat shock 70 kDa protein 5 (HSPA5) through cascade in silico screening based on the binding modes of tetrapeptides derived from the peptide substrate or inhibitors of Escherichia coli HSP70. Two compounds exhibited promising inhibitory activity in cancer cell viability and tumor inhibition assays. The binding modes of the hit compounds provide a platform for the development of selective small-molecule inhibitors of HSPA5.

Relevance: 30.00%

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, in order to introduce FPGAs as a potential platform to efficiently execute simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes that range from short/medium (e.g., 8,000 bit) to long length (e.g., 64,800 bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or FPGA may suit different purposes more conveniently, thus providing different acceleration factors over conventional multicore CPUs.
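The kind of workload being accelerated can be illustrated with a toy bitwidth sweep. The sketch below (plain Python; the quantisation model and trial counts are invented for illustration, not taken from the paper) shows the inner Monte Carlo loop that would be expressed as an OpenCL kernel and retargeted across CPU, GPU, or FPGA:

```python
import random

def quantise(x: float, bits: int) -> float:
    """Round x in [-1, 1) to a signed fixed-point grid with the given bitwidth."""
    scale = 2 ** (bits - 1)
    q = round(x * scale) / scale
    return max(-1.0, min((scale - 1) / scale, q))

def monte_carlo_error(bits: int, trials: int = 10_000, seed: int = 0) -> float:
    """Mean absolute quantisation error over random inputs (one design point)."""
    rng = random.Random(seed)
    return sum(abs(x - quantise(x, bits))
               for x in (rng.uniform(-1, 1) for _ in range(trials))) / trials

# Sweep candidate bitwidths; in the paper, this inner loop is the parallel
# kernel that runs unchanged on multicore CPUs, GPUs, and (via SOpenCL) FPGAs.
for bits in (4, 8, 12):
    print(bits, monte_carlo_error(bits))
```

Each extra bit halves the quantisation step, so the estimated error shrinks as the sweep widens the datapath, which is the trade-off such simulations are used to quantify.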

Relevance: 30.00%

Abstract:

Discrimination of different species across various target scopes within a single sensing platform can provide many advantages, such as simplicity, rapidity, and cost effectiveness. Here we design a three-input colorimetric logic gate based on the aggregation and anti-aggregation of gold nanoparticles (Au NPs) for the sensing of melamine, cysteine, and Hg2+. The concept takes advantage of the highly specific coordination and ligand-replacement reactions between melamine, cysteine, Hg2+, and Au NPs. Different outputs are obtained with the combinational inputs in the logic gates, which can serve as a reference to discriminate different analytes within a single sensing platform. Furthermore, besides the intrinsic sensitivity and selectivity of Au NPs to melamine-like compounds, the "INH" gates of melamine/cysteine and melamine/Hg2+ in this logic system can be employed for sensitive and selective detection of cysteine and Hg2+, respectively.
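The INHIBIT behaviour can be stated as a one-line Boolean function. In the sketch below (mapping the gate output to Au NP aggregation, i.e. the colour change, is our simplifying assumption), the gate fires only when the first analyte is present and the inhibiting one is absent:

```python
def inh(a: bool, b: bool) -> bool:
    """INHIBIT gate: true when input `a` is present and inhibitor `b` is absent."""
    return a and not b

# Truth table for the melamine/cysteine INH gate: melamine alone aggregates the
# Au NPs; adding cysteine suppresses the response, signalling its presence.
for melamine in (False, True):
    for cysteine in (False, True):
        print(melamine, cysteine, "->", inh(melamine, cysteine))
```

The melamine/Hg2+ gate follows the same pattern with Hg2+ as the inhibiting input, which is how a single platform discriminates the three analytes.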

Relevance: 30.00%

Abstract:

Randomised trials are at the heart of evidence-based healthcare, but the methods and infrastructure for conducting these sometimes complex studies are largely evidence free. Trial Forge (www.trialforge.org) is an initiative that aims to increase the evidence base for trial decision making and, in doing so, to improve trial efficiency.

This paper summarises a one-day workshop held in Edinburgh on 10 July 2014 to discuss Trial Forge and how to advance this initiative. We first outline the problem of inefficiency in randomised trials and go on to describe Trial Forge. We present participants' views on the processes in the life of a randomised trial that should be covered by Trial Forge.

General support existed at the workshop for the Trial Forge approach to increasing the evidence base for randomised trial decision making and improving trial efficiency. Key processes agreed upon included choosing the right research question; logistical planning for delivery, training of staff, recruitment, and retention; data management and dissemination; and close down. Linking to existing initiatives where possible was considered crucial. Trial Forge will not be a guideline or a checklist but a 'go to' website for research on randomised trial methods, with a linked programme of applied methodology research, coupled to an effective evidence-dissemination process. Moreover, it will support an informal network of interested trialists who meet virtually (online) and occasionally in person to build capacity and knowledge in the design and conduct of efficient randomised trials.

Some of the resources invested in randomised trials are wasted because of limited evidence upon which to base many aspects of design, conduct, analysis, and reporting of clinical trials. Trial Forge will help to address this lack of evidence.

Relevance: 30.00%

Abstract:

Demand Side Management (DSM) plays an important role in the Smart Grid. It involves large-scale access points, massive numbers of users, heterogeneous infrastructure, and dispersed participants. Cloud computing, a service model characterized by on-demand resources, high reliability, and large-scale integration, and game theory, a useful tool for analysing dynamic economic phenomena, are both well suited to this problem. In this study, a "cloud + end" scheme is proposed to solve the technical and economic problems of DSM. The cloud + end architecture is designed to solve the technical problems; in particular, a cloud + end construction model based on game theory is presented to solve the economic ones. The proposed method is tested on a DSM cloud + end public service system constructed in a city in southern China. The results demonstrate the feasibility of these integrated solutions, which can provide a reference for the popularization and application of DSM in China.

Relevance: 30.00%

Abstract:

BACKGROUND: Long-term hormone therapy has been the standard of care for advanced prostate cancer since the 1940s. STAMPEDE is a randomised controlled trial using a multiarm, multistage platform design. It recruits men with high-risk, locally advanced, metastatic or recurrent prostate cancer who are starting first-line long-term hormone therapy. We report primary survival results for three research comparisons testing the addition of zoledronic acid, docetaxel, or their combination to standard of care versus standard of care alone.

METHODS: Standard of care was hormone therapy for at least 2 years; radiotherapy was encouraged for men with N0M0 disease to November, 2011, then mandated; radiotherapy was optional for men with node-positive non-metastatic (N+M0) disease. Stratified randomisation (via minimisation) allocated men 2:1:1:1 to standard of care only (SOC-only; control), standard of care plus zoledronic acid (SOC + ZA), standard of care plus docetaxel (SOC + Doc), or standard of care with both zoledronic acid and docetaxel (SOC + ZA + Doc). Zoledronic acid (4 mg) was given for six 3-weekly cycles, then 4-weekly until 2 years, and docetaxel (75 mg/m(2)) for six 3-weekly cycles with prednisolone 10 mg daily. There was no blinding to treatment allocation. The primary outcome measure was overall survival. Pairwise comparisons of research versus control had 90% power at 2·5% one-sided α for hazard ratio (HR) 0·75, requiring roughly 400 control arm deaths. Statistical analyses were undertaken with standard log-rank-type methods for time-to-event data, with hazard ratios (HRs) and 95% CIs derived from adjusted Cox models. This trial is registered at ClinicalTrials.gov (NCT00268476) and ControlledTrials.com (ISRCTN78818544).
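The quoted event target can be sanity-checked with Schoenfeld's approximation for the number of events in a log-rank comparison. The sketch below is our own back-of-envelope check, not the trial's actual calculation: a 2:1 control:research allocation, HR 0·75, 90% power, and one-sided α of 2·5% give roughly 570 deaths in total, i.e. close to the "roughly 400 control arm deaths" quoted above:

```python
from math import log
from statistics import NormalDist

def schoenfeld_events(hr: float, alpha: float, power: float,
                      p_control: float, p_research: float) -> float:
    """Total deaths needed to detect hazard ratio `hr` with a log-rank test."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha)   # one-sided significance level
    z_beta = z(power)
    return (z_alpha + z_beta) ** 2 / (p_control * p_research * log(hr) ** 2)

total = schoenfeld_events(hr=0.75, alpha=0.025, power=0.90,
                          p_control=2/3, p_research=1/3)
control_deaths = total * 2/3  # control arm holds 2/3 of each pairwise comparison
print(round(total), round(control_deaths))  # ~571 total, ~381 in the control arm
```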

FINDINGS: 2962 men were randomly assigned to four groups between Oct 5, 2005, and March 31, 2013. Median age was 65 years (IQR 60-71). 1817 (61%) men had M+ disease, 448 (15%) had N+/X M0, and 697 (24%) had N0M0. 165 (6%) men were previously treated with local therapy, and median prostate-specific antigen was 65 ng/mL (IQR 23-184). Median follow-up was 43 months (IQR 30-60). There were 415 deaths in the control group (347 [84%] prostate cancer). Median overall survival was 71 months (IQR 32 to not reached) for SOC-only, not reached (32 to not reached) for SOC + ZA (HR 0·94, 95% CI 0·79-1·11; p=0·450), 81 months (41 to not reached) for SOC + Doc (0·78, 0·66-0·93; p=0·006), and 76 months (39 to not reached) for SOC + ZA + Doc (0·82, 0·69-0·97; p=0·022). There was no evidence of heterogeneity in treatment effect (for any of the treatments) across prespecified subsets. Grade 3-5 adverse events were reported for 399 (32%) patients receiving SOC, 197 (32%) receiving SOC + ZA, 288 (52%) receiving SOC + Doc, and 269 (52%) receiving SOC + ZA + Doc.

INTERPRETATION: Zoledronic acid showed no evidence of survival improvement and should not be part of standard of care for this population. Docetaxel chemotherapy, given at the time of long-term hormone therapy initiation, showed evidence of improved survival accompanied by an increase in adverse events. Docetaxel treatment should become part of standard of care for adequately fit men commencing long-term hormone therapy.

FUNDING: Cancer Research UK, Medical Research Council, Novartis, Sanofi-Aventis, Pfizer, Janssen, Astellas, NIHR Clinical Research Network, Swiss Group for Clinical Cancer Research.

Relevance: 30.00%

Abstract:

This study introduces an inexact, but ultra-low-power, computing architecture devoted to the embedded analysis of bio-signals. The platform operates at extremely low voltage supply levels to minimise energy consumption. In this scenario, the reliability of static RAM (SRAM) memories cannot be guaranteed when using conventional 6-transistor implementations. While error correction codes and dedicated SRAM implementations can ensure correct operation in this near-threshold regime, they incur significant area and energy overheads and should therefore be employed judiciously. Herein, the authors propose a novel scheme for designing inexact computing architectures that selectively protects memory regions based on their significance, i.e. their impact on the end-to-end quality of service, as dictated by the bio-signal application characteristics. The authors illustrate their scheme on an industrial benchmark application performing the power spectrum analysis of electrocardiograms. Experimental evidence showcases that a significance-based memory protection approach leads to only a small degradation in output quality with respect to an exact implementation, while resulting in substantial energy gains in both the memory and the processing subsystem.
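The selective-protection idea can be sketched as a budgeted partitioning problem. The example below (region names, sizes, and significance scores are invented for illustration) greedily places the most significant regions in protected storage until an area budget is exhausted:

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    size_kb: int
    significance: float  # impact of an error here on end-to-end quality (0..1)

def partition(regions, protected_budget_kb: int):
    """Greedily protect the most significant regions within an area budget."""
    protected, unprotected, used = [], [], 0
    for r in sorted(regions, key=lambda r: r.significance, reverse=True):
        if used + r.size_kb <= protected_budget_kb:
            protected.append(r.name)
            used += r.size_kb
        else:
            unprotected.append(r.name)
    return protected, unprotected

regions = [Region("fft_coeffs", 4, 0.9), Region("raw_samples", 16, 0.2),
           Region("output_buf", 8, 0.6)]
print(partition(regions, protected_budget_kb=12))
# (['fft_coeffs', 'output_buf'], ['raw_samples'])
```

Only the regions whose corruption most degrades the output land in the protected (e.g. ECC or robust-SRAM) area, which is the source of the reported energy savings.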

Relevance: 30.00%

Abstract:

Exascale computation is the next target of high-performance computing. In the push to create exascale computing platforms, simply increasing the number of hardware devices is not an acceptable option, given the limitations of power consumption, heat dissipation, and programming models designed for current hardware platforms. Instead, new hardware technologies, coupled with improved programming abstractions and more autonomous runtime systems, are required to achieve this goal. This position paper presents the design of a new runtime for a new heterogeneous hardware platform being developed to explore energy-efficient, high-performance computing. By combining a number of different technologies, this framework will both simplify the programming of current and future HPC applications and automate the scheduling of data and computation across this new hardware platform. In particular, this work explores the use of FPGAs to achieve both the power and performance goals of exascale, as well as utilising the runtime to automatically effect dynamic configuration and reconfiguration of these platforms.

Relevance: 30.00%

Abstract:

BACKGROUND: Falls and fall-related injuries are symptomatic of an aging population. This study aimed to design, develop, and deliver a novel method of balance training, using an interactive game-based system to promote engagement, with the inclusion of older adults at both high and low risk of experiencing a fall.

STUDY DESIGN: Eighty-two older adults (65 years of age and older) were recruited from sheltered accommodation and local activity groups. Forty volunteers were randomly selected and received 5 weeks of balance game training (5 males, 35 females; mean, 77.18 ± 6.59 years), whereas the remaining control participants recorded levels of physical activity (20 males, 22 females; mean, 76.62 ± 7.28 years). The effect of balance game training was measured on levels of functional balance and balance confidence in individuals with and without quantifiable balance impairments.

RESULTS: Balance game training had a significant effect on levels of functional balance and balance confidence (P < 0.05). This was further demonstrated in participants who were deemed at high risk of falls. The overall pattern of results suggests the training program is effective and suitable for individuals at all levels of ability and may therefore play a role in reducing the risk of falls.

CONCLUSIONS: Commercial hardware can be modified to deliver engaging methods of effective balance assessment and training for the older population.

Relevance: 30.00%

Abstract:

Introduction
Standard treatment for neovascular age-related macular degeneration (nAMD) is intravitreal injections of anti-VEGF drugs. Following multiple injections, nAMD lesions often become quiescent but there is a high risk of reactivation, and regular review by hospital ophthalmologists is the norm. The present trial examines the feasibility of community optometrists making lesion reactivation decisions.

Methods
The Effectiveness of Community vs Hospital Eye Service (ECHoES) trial is a virtual trial; lesion reactivation decisions were made about vignettes that comprised clinical data, colour fundus photographs, and optical coherence tomograms displayed on a web-based platform. Participants were either hospital ophthalmologists or community optometrists. All participants were provided with webinar training on the disease, its management, and assessment of the retinal imaging outputs. In a balanced design, 96 participants each assessed 42 vignettes; a total of 288 vignettes were assessed seven times by each professional group. The primary outcome is a participant's judgement of lesion reactivation compared with a reference standard. Secondary outcomes are the frequency of sight-threatening errors; judgements about specific lesion components; participant-rated confidence in their decisions about the primary outcome; and the cost effectiveness of follow-up by optometrists rather than ophthalmologists.

Discussion
This trial addresses an important question for the NHS, namely whether, with appropriate training, community optometrists can make retreatment decisions for patients with nAMD to the same standard as hospital ophthalmologists. The trial employed a novel approach as participation was entirely through a web-based application; the trial required very few resources compared with those that would have been needed for a conventional randomised controlled clinical trial.

Relevance: 30.00%

Abstract:

FPGAs and GPUs are often used when real-time performance in video processing is required. An accelerated processor is chosen based on task-specific priorities (power consumption, processing time, and detection accuracy), and this decision is normally made once, at design time. All three characteristics are important, particularly in battery-powered systems. Here we propose a method for moving the selection of processing platform from a single design-time choice to a continuous run-time one. We implement Histogram of Oriented Gradients (HOG) detectors for cars and people and Mixture of Gaussians (MoG) motion detectors running across FPGA, GPU, and CPU in a heterogeneous system. We use this to detect illegally parked vehicles in urban scenes. Power, time, and accuracy information for each detector is characterised. An anomaly measure is assigned to each detected object based on its trajectory and location, compared with learned contextual movement patterns. This drives processor and implementation selection, so that scenes with high behavioural anomalies are processed with faster but more power-hungry implementations, while routine or static periods are processed with power-optimised, less accurate, slower versions. Real-time performance is evaluated on video datasets including i-LIDS. Compared to power-optimised static selection, automatic dynamic implementation mapping is 10% more accurate but draws 12 W extra power in our testbed desktop system.
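The run-time mapping policy can be sketched as a threshold rule over profiled implementations. In the example below (the profile numbers and threshold are invented, not the paper's measurements), high-anomaly scenes select the fastest, most accurate implementation while routine scenes select the lowest-power one:

```python
# (implementation, watts, frames_per_second, relative_accuracy) — hypothetical profiles
PROFILES = [
    ("cpu_hog", 10.0, 5.0, 0.80),
    ("fpga_hog", 14.0, 25.0, 0.85),
    ("gpu_hog", 22.0, 60.0, 0.90),
]

def select_implementation(anomaly: float, threshold: float = 0.5) -> str:
    """Pick the detector implementation for the current scene anomaly level."""
    if anomaly >= threshold:
        # Anomalous behaviour: prioritise throughput and accuracy.
        return max(PROFILES, key=lambda p: (p[2], p[3]))[0]
    # Routine or static scene: prioritise low power draw.
    return min(PROFILES, key=lambda p: p[1])[0]

print(select_implementation(0.9), select_implementation(0.1))  # gpu_hog cpu_hog
```

Re-evaluating this rule per frame, or per time window, is what turns the one-off design-time choice into the continuous run-time selection described above.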