82 results for IoT platforms
Abstract:
The last decade has seen successful clinical application of polymer–protein conjugates (e.g. Oncaspar, Neulasta) and promising results in clinical trials with polymer–anticancer drug conjugates. This, together with the realisation that nanomedicines may play an important future role in cancer diagnosis and treatment, has increased interest in this emerging field. More than 10 anticancer conjugates have now entered clinical development. Phase I/II clinical trials involving N-(2-hydroxypropyl)methacrylamide (HPMA) copolymer–doxorubicin (PK1; FCE28068) showed a four- to fivefold reduction in anthracycline-related toxicity, and, despite cumulative doses up to 1680 mg/m² (doxorubicin equivalent), no cardiotoxicity was observed. Antitumour activity in chemotherapy-resistant/refractory patients (including breast cancer) was also seen at doxorubicin doses of 80–320 mg/m², consistent with tumour targeting by the enhanced permeability and retention (EPR) effect. Hints, preclinical and clinical, that polymer–anthracycline conjugation can bypass multidrug resistance (MDR) reinforce our hope that polymer drugs will prove useful in improving treatment of endocrine-related cancers. These promising early clinical results open the possibility of using water-soluble polymers as platforms for delivery of a cocktail of pendant drugs. In particular, we have recently described the first conjugates to combine endocrine therapy and chemotherapy. Their markedly enhanced in vitro activity encourages further development of such novel, polymer-based combination therapies. This review briefly describes the current status of polymer therapeutics as anticancer agents, and discusses the opportunities for design of second-generation, polymer-based combination therapy, including the cocktail of agents that will be needed to treat resistant metastatic cancer.
Abstract:
Measurements of the electrical characteristics of the atmosphere above the surface have been made for over 200 years, from a variety of different platforms, including kites, balloons, rockets and aircraft. From these measurements, a great deal of information about the electrical characteristics of the atmosphere has been gained, assisting our understanding of the global atmospheric electric circuit, thunderstorm electrification and lightning generation mechanisms, the discovery of transient luminous events above thunderstorms, and many other electrical phenomena. This paper surveys the history of atmospheric electrical measurements aloft, from the earliest manned balloon ascents to present-day observations with free balloons and aircraft. Measurements of atmospheric electrical parameters in a range of meteorological conditions are described, including clear air conditions, polluted conditions, non-thunderstorm clouds, and thunderstorm clouds, spanning a range of atmospheric conditions, from fair weather to the most electrically active.
Abstract:
The ability to create accurate geometric models of neuronal morphology is important for understanding the role of shape in information processing. Despite a significant amount of research on automating neuron reconstructions from image stacks obtained via microscopy, in practice most data are still collected manually. This paper describes Neuromantic, an open source system for three-dimensional digital tracing of neurites. Neuromantic reconstructions are comparable in quality to those of existing commercial and freeware systems while balancing the speed and accuracy of manual reconstruction. The combination of semi-automatic tracing, intuitive editing, and the ability to visualize large image stacks on standard computing platforms provides a versatile tool that can help address the bottleneck in the availability of reconstructions. Practical considerations for reducing the computational time and space requirements of the extended algorithm are also discussed.
Abstract:
The increasing use of social media, applications or platforms that allow users to interact online, ensures that this environment will provide a useful source of evidence for the forensics examiner. Current tools for the examination of digital evidence find this data problematic as they are not designed for the collection and analysis of online data. Therefore, this paper presents a framework for the forensic analysis of user interaction with social media. In particular, it presents an inter-disciplinary approach for the quantitative analysis of user engagement to identify relational and temporal dimensions of evidence relevant to an investigation. This framework enables the analysis of large data sets from which a (much smaller) group of individuals of interest can be identified. In this way, it may be used to support the identification of individuals who might be ‘instigators’ of a criminal event orchestrated via social media, or a means of potentially identifying those who might be involved in the ‘peaks’ of activity. In order to demonstrate the applicability of the framework, this paper applies it to a case study of actors posting to a social media Web site.
Abstract:
A new record of sea surface temperature (SST) for climate applications is described. This record provides independent corroboration of global variations estimated from SST measurements made in situ. Infrared imagery from Along-Track Scanning Radiometers (ATSRs) is used to create a 20 year time series of SST at 0.1° latitude-longitude resolution, in the ATSR Reprocessing for Climate (ARC) project. A very high degree of independence from in situ measurements is achieved via physics-based techniques. Skin SST and SST estimated for 20 cm depth are provided, with grid cell uncertainty estimates. Comparison with in situ data sets establishes that ARC SSTs generally have a bias of order 0.1 K or smaller. The precision of the ARC SSTs is 0.14 K during 2003 to 2009, from three-way error analysis. Over the period 1994 to 2010, ARC SSTs are stable, with better than 95% confidence, to within 0.005 K yr⁻¹ (demonstrated for tropical regions). The data set appears useful for cleanly quantifying interannual variability in SST and major SST anomalies. The ARC SST global anomaly time series is compared to the in situ-based Hadley Centre SST data set version 3 (HadSST3). Within known uncertainties in bias adjustments applied to in situ measurements, the independent ARC record and HadSST3 present the same variations in global marine temperature since 1996. Since the in situ observing system evolved significantly in its mix of measurement platforms and techniques over this period, ARC SSTs provide an important corroboration that HadSST3 accurately represents recent variability and change in this essential climate variable.
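For readers unfamiliar with the three-way error analysis mentioned above: given three independent, collocated SST estimates, each data set's error standard deviation can be derived from the variances of the pairwise differences. A minimal illustrative sketch, assuming independent and unbiased errors (not the ARC processing code itself):

    import numpy as np

    def three_way_errors(x1, x2, x3):
        # Variances of pairwise differences between collocated measurements.
        v12, v13, v23 = np.var(x1 - x2), np.var(x1 - x3), np.var(x2 - x3)
        # With mutually independent errors, each pairwise variance is the sum of
        # the two error variances, so the individual terms can be solved for.
        s1 = np.sqrt(0.5 * (v12 + v13 - v23))
        s2 = np.sqrt(0.5 * (v12 + v23 - v13))
        s3 = np.sqrt(0.5 * (v13 + v23 - v12))
        return s1, s2, s3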
Abstract:
We propose and demonstrate a fully probabilistic (Bayesian) approach to the detection of cloudy pixels in thermal infrared (TIR) imagery observed from satellite over oceans. Using this approach, we show how to exploit the prior information and the fast forward modelling capability that are typically available in the operational context to obtain improved cloud detection. The probability of clear sky for each pixel is estimated by applying Bayes' theorem, and we describe how to apply Bayes' theorem to this problem in general terms. Joint probability density functions (PDFs) of the observations in the TIR channels are needed; the PDFs for clear conditions are calculable from forward modelling and those for cloudy conditions have been obtained empirically. Using analysis fields from numerical weather prediction as prior information, we apply the approach to imagery representative of imagers on polar-orbiting platforms. In comparison with the established cloud-screening scheme, the new technique decreases both the rate of failure to detect cloud contamination and the false-alarm rate by one quarter. The rate of occurrence of cloud-screening-related errors of >1 K in area-averaged SSTs is reduced by 83%. Copyright © 2005 Royal Meteorological Society.
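As a minimal sketch of the Bayesian screening idea (illustrative only, with hypothetical names; in the paper the clear-sky PDF comes from fast forward modelling of the NWP prior and the cloudy-sky PDF is empirical):

    def clear_sky_probability(obs, pdf_clear, pdf_cloudy, prior_clear):
        # Bayes' theorem for a single pixel:
        #   P(clear | obs) = p(obs | clear) P(clear) / p(obs)
        # pdf_clear(obs)  : likelihood of the TIR observations given clear sky
        # pdf_cloudy(obs) : likelihood given cloudy sky (empirical joint PDF)
        # prior_clear     : prior probability of clear sky for this pixel
        num = pdf_clear(obs) * prior_clear
        evidence = num + pdf_cloudy(obs) * (1.0 - prior_clear)
        return num / evidence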
Abstract:
We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations. The modified algorithm runs more than 50 times faster on the Cell’s Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times compared to the original code on the main CPU. Because the radiation code takes more than 60% of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns results that are bit-wise identical to those of the original code, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
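A small, hypothetical Python sketch of the scheduling pattern described above (the actual FAMOUS code is Fortran/C): independent air columns are grouped into packs of four, matching a 4-wide SIMD unit, and the packs are dispatched from a task queue to a pool of worker threads:

    from concurrent.futures import ThreadPoolExecutor

    PACK = 4  # number of air columns computed together (one SIMD width)

    def radiation_for_pack(columns):
        # Stand-in for the per-column radiation calculation; a real version
        # would process the four packed columns in lock-step.
        return [sum(column) for column in columns]

    def compute_radiation(all_columns, workers=8):
        packs = [all_columns[i:i + PACK] for i in range(0, len(all_columns), PACK)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = pool.map(radiation_for_pack, packs)
        return [value for pack in results for value in pack]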
Abstract:
Hydrophilic interaction chromatography–mass spectrometry (HILIC–MS) was used for anionic metabolic profiling of urine from antibiotic-treated rats to study microbial–host co-metabolism. Rats were treated with the antibiotics penicillin G and streptomycin sulfate for four or eight days and compared to a control group. Urine samples were collected at day zero, four and eight, and analyzed by HILIC–MS. Multivariate data analysis was applied to the urinary metabolic profiles to identify biochemical variation between the treatment groups. Principal component analysis found a clear distinction between those animals receiving antibiotics and the control animals, with twenty-nine discriminatory compounds of which twenty were down-regulated and nine up-regulated upon treatment. In the treatment group receiving antibiotics for four days, a recovery effect was observed for seven compounds after cessation of antibiotic administration. Thirteen discriminatory compounds could be putatively identified based on their accurate mass, including aconitic acid, benzenediol sulfate, ferulic acid sulfate, hippuric acid, indoxyl sulfate, penicillin G, phenol and vanillin 4-sulfate. The rat urine samples had previously been analyzed by capillary electrophoresis (CE) with MS detection and proton nuclear magnetic resonance (1H NMR) spectroscopy. Using CE–MS and 1H NMR spectroscopy seventeen and twenty-five discriminatory compounds were found, respectively. Both hippuric acid and indoxyl sulfate were detected across all three platforms. Additionally, eight compounds were observed with both HILIC–MS and CE–MS. Overall, HILIC–MS appears to be highly complementary to CE–MS and 1H NMR spectroscopy, identifying additional compounds that discriminate the urine samples from antibiotic-treated and control rats.
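The multivariate step described above can be illustrated with a generic PCA sketch on a samples-by-features intensity matrix; the data layout is hypothetical and this is not the authors' exact workflow:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def pca_scores(intensities, n_components=2):
        # intensities: rows = urine samples, columns = metabolite features
        scaled = StandardScaler().fit_transform(intensities)
        return PCA(n_components=n_components).fit_transform(scaled)

    # Plotting the first two scores for control and antibiotic-treated samples
    # would then reveal the kind of group separation reported in the study.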
Abstract:
SOA (Service Oriented Architecture), workflow, the Semantic Web, and Grid computing are key enabling information technologies in the development of increasingly sophisticated e-Science infrastructures and application platforms. While the emergence of Cloud computing as a new computing paradigm has provided new directions and opportunities for e-Science infrastructure development, it also presents some challenges. Scientific research is increasingly finding it difficult to handle “big data” using traditional data processing techniques. Such challenges demonstrate the need for a comprehensive analysis of how to use the above-mentioned informatics techniques to develop appropriate e-Science infrastructures and platforms in the context of Cloud computing. This survey paper describes recent research advances in applying informatics techniques to facilitate scientific research, particularly from the Cloud computing perspective. Our particular contributions include identifying associated research challenges and opportunities, presenting lessons learned, and describing our future vision for applying Cloud computing to e-Science. We believe our research findings can help indicate the future trend of e-Science, and can inform funding and research directions on how to more appropriately employ computing technologies in scientific research. We point out open research issues, hoping to spark new development and innovation in the e-Science field.
Abstract:
Background: Massive Open Online Courses (MOOCs) have become immensely popular in a short span of time. However, there is very little research exploring MOOCs in the discipline of Health and Medicine. This paper aims to fill this void by providing a review of Health and Medicine related MOOCs. Objective: Provide a review of Health and Medicine related MOOCs offered by various MOOC platforms within the year 2013. Analyze and compare the various offerings, their target audience, typical length of a course and credentials offered. Discuss opportunities and challenges presented by MOOCs in the discipline of Health and Medicine. Methods: Health and Medicine related MOOCs were gathered using several methods to ensure the richness and completeness of data. Identified MOOC platform websites were used to gather the lists of offerings. In parallel, these MOOC platforms were contacted to access official data on their offerings. Two MOOC aggregator sites (Class Central and MOOC List) were also consulted to gather data on MOOC offerings. Eligibility criteria were defined to concentrate on the courses that were offered in 2013 and primarily on the subject ‘Health and Medicine’. All language translations in this paper were achieved using Google Translate. Results: The search identified 225 courses, of which 98 were eligible for the review (n = 98). 58% (57) of the MOOCs considered were offered on the Coursera platform, and 94% (92) of all the MOOCs were offered in English. 90 MOOCs were offered by universities, and Johns Hopkins University offered the largest number of MOOCs (12). Only three MOOCs were offered by developing countries (China, West Indies, and Saudi Arabia). The duration of MOOCs varied from three weeks to 20 weeks, with an average length of 6.7 weeks. On average, MOOCs expected a participant to work on the material for 4.2 hours a week. Verified Certificates were offered by 14 MOOCs, while three others offered other professional recognition. Conclusions: The review presents evidence to suggest that MOOCs can be used as a way to provide continuing medical education. It also shows the potential of MOOCs as a means of increasing health literacy among the public.
Abstract:
Observations of atmospheric conditions and processes in cities are fundamental to understanding the interactions between the urban surface and weather/climate, improving the performance of urban weather, air quality and climate models, and providing key information for city end-users (e.g. decision-makers, stakeholders, public). In this paper, Shanghai's urban integrated meteorological observation network (SUIMON) and some examples of intended applications are introduced. Its characteristics include being: multi-purpose (e.g. forecast, research, service), multi-function (high impact weather, city climate, special end-users), multi-scale (e.g. macro/meso-, urban-, neighborhood, street canyon), multi-variable (e.g. thermal, dynamic, chemical, bio-meteorological, ecological), and multi-platform (e.g. radar, wind profiler, ground-based, satellite-based, in-situ observation/sampling). Underlying SUIMON is a data management system to facilitate exchange of data and information. The overall aim of the network is to improve coordination strategies and instruments; to identify data gaps based on science and user driven requirements; and to intelligently combine observations from a variety of platforms by using a data assimilation system that is tuned to produce the best estimate of the current state of the urban atmosphere.
Abstract:
As the fidelity of virtual environments (VEs) continues to increase, the possibility of using them as training platforms is becoming increasingly realistic for a variety of application domains, including military and emergency personnel training. In the past, there was much debate on whether the acquisition and subsequent transfer of spatial knowledge from VEs to the real world is possible, or whether the differences in medium during training would essentially be an obstacle to truly learning geometric space. In this paper, the authors present various cognitive and environmental factors that not only contribute to this process, but also interact with each other to a certain degree, leading to a variable exposure time requirement in order for the process of spatial knowledge acquisition (SKA) to occur. The cognitive factors that the authors discuss include a variety of individual user differences such as: knowledge and experience; cognitive gender differences; aptitude and spatial orientation skill; and finally, cognitive styles. Environmental factors discussed include size, spatial layout complexity and landmark distribution. It may seem obvious that, since every individual's brain is unique (not only through experience, but also through genetic predisposition), a one-size-fits-all approach to training would be illogical. Furthermore, considering that various cognitive differences may further emerge when a certain stimulus is present (e.g. a complex environmental space), it would make even more sense to understand how these factors can impact spatial memory, and to try to adapt the training session by providing visual/auditory cues as well as by changing the exposure time requirements for each individual. This research domain is important to VE training in general; within service and military domains, however, guaranteeing appropriate spatial training is critical to ensure that disorientation does not occur in a life-or-death scenario.
Abstract:
Massive Open Online Courses (MOOCs) are a new addition to open educational provision. They are offered mainly by prestigious universities on various commercial and non-commercial MOOC platforms, allowing anyone who is interested to experience the world-class teaching practiced in these universities. MOOCs have attracted wide interest from around the world. However, learner demographics in MOOCs suggest that some demographic groups are underrepresented. At present, MOOCs seem to be better serving the continuing professional development sector.
Abstract:
The past years have shown an enormous advancement in sequencing and array-based technologies, producing supplementary or alternative views of the genome stored in various formats and databases. Their sheer volume and differing data scope pose a challenge for jointly visualizing and integrating diverse data types. We present AmalgamScope, a new interactive software tool focused on assisting scientists with the annotation of the human genome and particularly the integration of annotation files from multiple data types, using gene identifiers and genomic coordinates. Supported platforms include next-generation sequencing and microarray technologies. The available features of AmalgamScope range from the annotation of diverse data types across the human genome to integration of the data based on the annotation information and visualization of the merged files within chromosomal regions or the whole genome. Additionally, users can define custom transcriptome library files for any species and use the tool's remote server options for exchanging files.
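The central integration step (joining annotations from different platforms on shared gene identifiers and coordinates) can be sketched as follows; the file names and column names here are hypothetical and do not reflect AmalgamScope's actual formats:

    import pandas as pd

    # Hypothetical annotation tables from two platforms, each carrying a gene ID
    # and genomic coordinates.
    ngs = pd.read_csv("ngs_annotation.tsv", sep="\t")      # e.g. sequencing-based calls
    array = pd.read_csv("array_annotation.tsv", sep="\t")  # e.g. microarray probes

    # Join the two annotation sets on shared identifiers and write the merged file.
    merged = ngs.merge(array, on=["gene_id", "chrom"], suffixes=("_ngs", "_array"))
    merged.to_csv("merged_annotation.tsv", sep="\t", index=False)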
Abstract:
Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools and their adoption is growing in popularity. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires the adoption of pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated pre-processing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of pre-processing and mining tools, which leads to an error-prone and inefficient process. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench that automates the pre-processing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionalities available in the KNIME workbench.
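As an example of the kind of import step K-Surfer automates, the sketch below reads a FreeSurfer aseg.stats table (comment lines start with '#', data rows are whitespace-delimited) into a data frame; the path and column layout are assumptions based on typical FreeSurfer output, and this is not K-Surfer's own code:

    import pandas as pd

    # Column order as commonly found in FreeSurfer's aseg.stats output.
    columns = ["index", "seg_id", "n_voxels", "volume_mm3", "struct_name",
               "norm_mean", "norm_stddev", "norm_min", "norm_max", "norm_range"]
    stats = pd.read_csv("subject01/stats/aseg.stats", sep=r"\s+",
                        comment="#", names=columns)
    print(stats[["struct_name", "volume_mm3"]].head())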