932 results for Interoperability of Applications
Abstract:
This paper considers left-invariant control systems defined on the Lie groups SU(2) and SO(3). Such systems have a number of applications in both classical and quantum control problems. The purpose of this paper is two-fold. Firstly, the optimal control problem for a system varying on these Lie groups, with a cost that is quadratic in the control, is lifted to their Hamiltonian vector fields through the Maximum Principle of optimal control and explicitly solved. Secondly, the control systems are integrated down to the level of the group to give the solutions for the optimal paths corresponding to the optimal controls. In addition, it is shown here that integrating these equations on the Lie algebra su(2) gives simpler solutions than when they are integrated on the Lie algebra so(3).
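As a rough illustration of the class of problems this abstract describes, a left-invariant system with quadratic cost and its Maximum Principle Hamiltonian can be sketched as follows; the notation (two controls u_1, u_2, weights c_i, Lie algebra elements A_i) is assumed for illustration and is not taken from the paper.

```latex
% Illustrative sketch only; the number of controls and all symbols are assumptions.
\begin{align}
  \dot{g}(t) &= g(t)\,\bigl(u_1(t)\,A_1 + u_2(t)\,A_2\bigr),
  \qquad g(t)\in G\in\{SU(2),\,SO(3)\},\; A_i\in\mathfrak{g},\\
  J(u) &= \tfrac{1}{2}\int_0^T \bigl(c_1 u_1(t)^2 + c_2 u_2(t)^2\bigr)\,dt
  \;\longrightarrow\; \min .
\end{align}
% The Maximum Principle lifts the problem to the cotangent bundle T^{*}G.
% Writing p_i for the momentum functions paired with the directions A_i,
% the normal-case Hamiltonian and optimal controls are
\begin{equation}
  H \;=\; \frac{p_1^{\,2}}{2c_1} + \frac{p_2^{\,2}}{2c_2},
  \qquad u_i^{*} \;=\; \frac{p_i}{c_i},
\end{equation}
% and the optimal trajectories follow by integrating the associated Hamiltonian
% vector field and projecting the result back down onto the group.
```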
Abstract:
Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
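For context, the standard notion of an invariant pair of a matrix polynomial, which this abstract builds on, can be sketched as follows; the notation is generic rather than quoted from the paper.

```latex
% Generic definition, sketched for context; notation is not quoted from the paper.
% For a matrix polynomial of degree d,
%   P(\lambda) = \sum_{i=0}^{d} \lambda^{i} A_i, \qquad A_i \in \mathbb{C}^{n\times n},
% a pair (X, S), with X \in \mathbb{C}^{n\times k} and S \in \mathbb{C}^{k\times k},
% is called an invariant pair if
\begin{equation}
  P(X,S) \;:=\; \sum_{i=0}^{d} A_i\,X\,S^{\,i} \;=\; 0 .
\end{equation}
% For k = 1 this reduces to an ordinary eigenpair, P(\lambda)\,x = 0, and for the
% linear case P(\lambda) = \lambda I - A it reduces to the familiar
% invariant-subspace relation A X = X S.
```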
Abstract:
This paper introduces a new fast, effective and practical model structure construction algorithm for a mixture of experts network system utilising only process data. The algorithm is based on a novel forward constrained regression procedure. Given a full set of the experts as potential model bases, the structure construction algorithm, built on the forward constrained regression procedure, selects the most significant model bases one by one so as to minimise the overall system approximation error at each iteration, while the gate parameters in the mixture of experts network system are adjusted accordingly so as to satisfy the convex constraints required in the derivation of the forward constrained regression procedure. The procedure continues until a proper system model is constructed that utilises some or all of the experts. A pruning algorithm for the resulting mixture of experts network system is also derived to yield an overall parsimonious construction algorithm. Numerical examples are provided to demonstrate the effectiveness of the new algorithms. The mixture of experts network framework can be applied to a wide variety of applications ranging from multiple model controller synthesis to multi-sensor data fusion.
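A minimal sketch of a greedy, convexly constrained forward selection of experts is shown below. It uses a fixed convex weight vector in place of input-dependent gate parameters and a simple projected-gradient refit at each step, so it only illustrates the flavour of such a procedure, not the paper's forward constrained regression algorithm; all names are ours.

```python
import numpy as np

def forward_select_experts(expert_outputs, y, max_experts=None, tol=1e-6):
    """Greedy forward selection of experts under convex (simplex) weights.

    expert_outputs : (n_samples, n_experts) predictions of each candidate expert
    y              : (n_samples,) target values
    """
    n, m = expert_outputs.shape
    max_experts = max_experts or m
    selected, weights = [], None

    for _ in range(max_experts):
        best_err, best_j, best_w = np.inf, None, None
        for j in range(m):
            if j in selected:
                continue
            cols = selected + [j]
            w = _fit_simplex_weights(expert_outputs[:, cols], y)
            err = np.mean((expert_outputs[:, cols] @ w - y) ** 2)
            if err < best_err:
                best_err, best_j, best_w = err, j, w
        if weights is not None:
            prev_err = np.mean((expert_outputs[:, selected] @ weights - y) ** 2)
            if best_err > prev_err - tol:     # no significant improvement: stop
                break
        selected.append(best_j)
        weights = best_w
    return selected, weights

def _fit_simplex_weights(X, y, n_iter=500, lr=0.05):
    """Least squares with weights constrained to the probability simplex
    (projected gradient descent; a stand-in for the constrained regression)."""
    k = X.shape[1]
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / len(y)
        w = _project_simplex(w - lr * grad)
    return w

def _project_simplex(v):
    """Euclidean projection onto the simplex {w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)
```

In a full mixture-of-experts system the gate weights would of course depend on the input; the simplex constraint here merely plays the same role as the convex constraints mentioned in the abstract.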
Abstract:
The layer-by-layer deposition of polymers onto surfaces allows the fabrication of multilayered materials for a wide range of applications, from drug delivery to biosensors. This work describes the analysis of complex formation between poly(acrylic acid) and methylcellulose in aqueous solutions using Biacore, a surface plasmon resonance analytical technique traditionally used to examine biological interactions. This technique characterized the layer-by-layer deposition of these polymers on the surface of a Biacore sensor chip. The results were subsequently used to optimize the experimental conditions for sequential layer deposition on glass slides. The role of solution pH and poly(acrylic acid) molecular weight in the formation of interpolymer multilayered coatings was investigated, showing that the optimal deposition of the polymer complexes was achieved at pH ≤ 2.5 with a poly(acrylic acid) molecular weight of 450 kDa.
Abstract:
BACKGROUND: The single nucleotide polymorphism (SNP), and consequent amino acid exchange from tyrosine to cysteine at location 139 of the vkorc1 gene (i.e. tyrosine139cysteine or Y139C), is the most widespread anticoagulant resistance mutation in Norway rats (Rattus norvegicus Berk.) in Europe. Field trials were conducted to determine the incidence of the Y139C SNP at two rat-infested farms in Westphalia, Germany, and to estimate the practical efficacy against them of applications of a proprietary bait (Klerat™) containing 50 ppm brodifacoum, using a pulsed baiting treatment regime. RESULTS: DNA analysis for the Y139C mutation showed that resistant rats were prevalent at the two farms, with incidences of 80.0% and 78.6% respectively. Applications of brodifacoum bait achieved 99.2% and 100.0% control at the two farms, when measured by census baiting, although the treatment was somewhat prolonged at one site owing to the abundance of attractive alternative food. CONCLUSION: The study showed that 50 ppm brodifacoum bait is fully effective against the Y139C SNP at the Münsterland focus and is likely to be so elsewhere in Europe where this mutation is found. The pulsed baiting regime reduced to relatively low levels the quantity of bait required to control these two substantial resistant Norway rat infestations. Previous studies had shown much larger quantities of bromadiolone and difenacoum baits used in ineffective treatments against Y139C-resistant rats in the Münsterland. These results should be considered when making decisions about the use of anticoagulants against resistant Norway rats and their potential environmental impacts.
Abstract:
The addition of small quantities of nanoparticles to conventional and sustainable thermoplastics leads to property enhancements with considerable potential in many areas of application, including food packaging [1], lightweight composites and high-performance materials [2]. In the case of sustainable polymers [3], the addition of nanoparticles may well enhance properties sufficiently that the portfolio of possible applications is greatly increased. Most engineered nanoparticles are highly stable and exist as nanoparticles prior to compounding with the polymer resin. They remain as nanoparticles during the active use of the packaging material as well as in the subsequent waste and recycling streams. It is also possible to construct the nanoparticles within the polymer films during processing from organic compounds selected to present minimal or no potential health hazards [4]. In both cases the characterisation of the resultant nanostructured polymers presents a number of challenges. Foremost amongst these are the coupled challenges of the nanoscale of the particles and the low fraction present in the polymer matrix. Very low fractions of nanoparticles are only effective if the dispersion of the particles is good. This continues to be an issue in the process engineering, although poor dispersion is of course much easier to see than good dispersion. In this presentation we show the merits of a combined scattering (neutron and X-ray) and microscopy (SEM, TEM, AFM) approach. We explore this methodology using rod-like, plate-like and spheroidal particles, including metallic particles, plate-like and rod-like clay dispersions, and nanoscale carbon-based particles such as nanotubes and graphene flakes. We draw on a range of material systems, many explored in partnership with other members of Napolynet. The value of adding nanoscale particles is that their scale matches the scale of the structure in the polymer matrix. Although this can lead to difficulties in separating the effects in scattering experiments, in morphological studies it means that both the nanoparticles and the polymer morphology are revealed.
Abstract:
Global NDVI data are routinely derived from the AVHRR, SPOT-VGT, and MODIS/Terra earth observation records for a range of applications, from terrestrial vegetation monitoring to climate change modeling. This has led to a substantial interest in the harmonization of multisensor records. Most evaluations of the internal consistency and continuity of global multisensor NDVI products have focused on time-series harmonization in the spectral domain, often neglecting the spatial domain. We fill this void by applying variogram modeling (a) to evaluate the differences in spatial variability between 8-km AVHRR, 1-km SPOT-VGT, and 1-km, 500-m, and 250-m MODIS NDVI products over eight EOS (Earth Observing System) validation sites, and (b) to characterize the decay of spatial variability as a function of pixel size (i.e. data regularization) for spatially aggregated Landsat ETM+ NDVI products and a real multisensor dataset. First, we demonstrate that the conjunctive analysis of two variogram properties – the sill and the mean length scale metric – provides a robust assessment of the differences in spatial variability between multiscale NDVI products that are due to spatial (nominal pixel size, point spread function, and view angle) and non-spatial (sensor calibration, cloud clearing, atmospheric corrections, and length of multi-day compositing period) factors. Next, we show that as the nominal pixel size increases, the decay of spatial information content follows a logarithmic relationship, with a stronger fit for the spatially aggregated NDVI products (R² = 0.9321) than for the native-resolution AVHRR, SPOT-VGT, and MODIS NDVI products (R² = 0.5064). This relationship serves as a reference for evaluating the differences in spatial variability and length scales in multiscale datasets at native or aggregated spatial resolutions. The outcomes of this study suggest that multisensor NDVI records cannot be integrated into a long-term data record without proper consideration of all factors affecting their spatial consistency. Hence, we propose an approach for selecting the spatial resolution at which differences in spatial variability between NDVI products from multiple sensors are minimized. This approach provides practical guidance for the harmonization of long-term multisensor datasets.
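The two variogram properties named in this abstract (the sill and a length scale) can be estimated from a gridded NDVI image roughly as follows. This is a generic sketch using an exponential variogram model and our own function names; it is not the authors' processing chain.

```python
import numpy as np
from scipy.optimize import curve_fit

def empirical_variogram(values, pixel_size, max_lag_pixels=30):
    """Isotropic empirical semivariogram of a 2-D NDVI array.

    Illustrative sketch only: uses row/column lags without directional binning.
    """
    lags, gammas = [], []
    for h in range(1, max_lag_pixels + 1):
        dx = values[:, h:] - values[:, :-h]          # horizontal pixel pairs
        dy = values[h:, :] - values[:-h, :]          # vertical pixel pairs
        diffs = np.concatenate([dx.ravel(), dy.ravel()])
        lags.append(h * pixel_size)
        gammas.append(0.5 * np.nanmean(diffs ** 2))  # semivariance at lag h
    return np.array(lags), np.array(gammas)

def exponential_model(h, nugget, sill, length_scale):
    """Exponential variogram model: gamma(h) = nugget + sill * (1 - exp(-h / a))."""
    return nugget + sill * (1.0 - np.exp(-h / length_scale))

def fit_sill_and_length_scale(lags, gammas):
    """Fit the model and return (sill, length_scale), the two properties used
    to compare spatial variability across multiscale NDVI products."""
    p0 = [0.0, float(np.nanmax(gammas)), float(lags[len(lags) // 3])]
    (nugget, sill, length_scale), _ = curve_fit(
        exponential_model, lags, gammas, p0=p0, maxfev=10000)
    return sill, length_scale
```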
Abstract:
A better understanding of the links between the properties of the urban environment and the exchanges to the atmosphere is central to a wide range of applications. The numerous measurements of surface energy balance data in urban areas enable intercomparison of observed fluxes from distinct environments. This study analyzes a large database in two new ways. First, instead of normalizing fluxes by net all-wave radiation, only the incoming radiative fluxes are used, to remove the surface attributes from the denominator. Second, because data are now available year-round, indices are developed to characterize the fraction of the surface (built; vegetation) actively engaged in energy exchanges. These account for shading patterns within city streets and seasonal changes in vegetation phenology; their impact on the partitioning of the incoming radiation is analyzed. Data from 19 sites in North America, Europe, Africa, and Asia (including 6-yr-long observation campaigns) are used to derive generalized surface–flux relations. The midday-period outgoing radiative fraction decreases with an increasing total active surface index, the stored energy fraction increases with an active built index, and the latent heat fraction increases with an active vegetated index. Parameterizations of these energy exchange ratios as a function of the surface indices [i.e., the Flux Ratio–Active Index Surface Exchange (FRAISE) scheme] are developed. These are used to define four urban zones that characterize energy partitioning on the basis of their active surface indices. An independent evaluation of FRAISE, using three additional sites from the Basel Urban Boundary Layer Experiment (BUBBLE), yields accurate predictions of the midday flux partitioning at each location.
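As a rough illustration of the normalization step described here (energy-balance terms expressed as fractions of the incoming rather than the net all-wave radiation), a short sketch follows; the variable names and exact ratio definitions are assumptions of ours, not the FRAISE formulation.

```python
def flux_fractions(q_h, q_e, delta_q_s, k_up, l_up, k_down, l_down):
    """Express midday energy-balance terms as fractions of the incoming
    radiation (K_down + L_down) instead of the net all-wave radiation.

    Illustrative only; ratio definitions are assumed, not taken from the paper.
    """
    incoming = k_down + l_down                       # incoming shortwave + longwave
    return {
        "outgoing_radiative_fraction": (k_up + l_up) / incoming,
        "turbulent_sensible_fraction": q_h / incoming,
        "latent_heat_fraction": q_e / incoming,
        "stored_energy_fraction": delta_q_s / incoming,
    }
```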
Abstract:
For an increasing number of applications, mesoscale modelling systems now aim to better represent urban areas. The complexity of processes resolved by urban parametrization schemes varies with the application. The concept of fitness-for-purpose is therefore critical for both the choice of parametrizations and the way in which the scheme should be evaluated. A systematic and objective model response analysis procedure (Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm) is used to assess the fitness of the single-layer urban canopy parametrization implemented in the Weather Research and Forecasting (WRF) model. The scheme is evaluated regarding its ability to simulate observed surface energy fluxes and the sensitivity to input parameters. Recent amendments are described, focussing on features which improve its applicability to numerical weather prediction, such as a reduced and physically more meaningful list of input parameters. The study shows a high sensitivity of the scheme to parameters characterizing roof properties in contrast to a low response to road-related ones. Problems in partitioning of energy between turbulent sensible and latent heat fluxes are also emphasized. Some initial guidelines to prioritize efforts to obtain urban land-cover class characteristics in WRF are provided. Copyright © 2010 Royal Meteorological Society and Crown Copyright.
Abstract:
The aim of this article is to improve the communication of the probabilistic flood forecasts generated by hydrological ensemble prediction systems (HEPS) by understanding perceptions of different methods of visualizing probabilistic forecast information. This study focuses on interexpert communication and accounts for differences in visualization requirements based on the information content necessary for individual users. The perceptions of the expert group addressed in this study are important because they are the designers and primary users of existing HEPS. Nevertheless, they have sometimes resisted the release of uncertainty information to the general public because of doubts about whether it can be successfully communicated in ways that would be readily understood by nonexperts. In this article, we explore the strengths and weaknesses of existing HEPS visualization methods and thereby formulate some wider recommendations about best practice for HEPS visualization and communication. We suggest that specific training on probabilistic forecasting would foster the use of probabilistic forecasts in a wider range of applications. The results of a case study exercise showed that there is no overarching agreement between experts on how to display probabilistic forecasts and on what they consider the essential information that should accompany plots and diagrams. In this article, we propose a list of minimum properties that, if consistently displayed with probabilistic forecasts, would make the products more easily understandable. Copyright © 2012 John Wiley & Sons, Ltd.
Abstract:
With a wide range of applications benefiting from dense-network air temperature observations, but with limitations imposed by cost, existing siting guidelines and the risk of damage to sensors, new methods are required to gain a high-resolution understanding of the spatio-temporal patterns of urban meteorological phenomena such as the urban heat island, or to meet precision farming needs. With the launch of a new generation of low-cost sensors it is possible to deploy a network to monitor air temperature at finer spatial resolutions. Here we investigate the Aginova Sentinel Micro (ASM) sensor with a bespoke radiation shield (together < US$150), which can provide secure near-real-time air temperature data to a server utilising existing (or user-deployed) Wireless Fidelity (Wi-Fi) networks. This makes it ideally suited for deployment where wireless communications readily exist, notably urban areas. Assessment of the performance of the ASM relative to traceable standards in a water bath and an atmospheric chamber shows it to have good measurement accuracy, with mean errors < ±0.22 °C between -25 and 30 °C and a time constant in ambient air of 110 ± 15 s. Subsequent field tests within the bespoke shield also showed excellent performance (root-mean-square error = 0.13 °C) over a range of meteorological conditions relative to a traceable operational UK Met Office platinum resistance thermometer. These results indicate that the ASM and bespoke shield are more than fit-for-purpose for dense network deployment in urban areas at relatively low cost compared with existing observation techniques.
Abstract:
Climate data are used in a number of applications, including climate risk management and adaptation to climate change. However, the availability of climate data, particularly throughout rural Africa, is very limited. Available weather stations are unevenly distributed and mainly located along main roads in cities and towns. This imposes severe limitations on the availability of climate information and services for the rural community where, arguably, these services are needed most. Weather station data also suffer from gaps in the time series. Satellite proxies, particularly satellite rainfall estimates, have been used as alternatives because of their availability even over remote parts of the world. However, satellite rainfall estimates also suffer from a number of critical shortcomings, including heterogeneous time series, short periods of observation, and poor accuracy, particularly at higher temporal and spatial resolutions. An attempt is made here to alleviate these problems by combining station measurements with the complete spatial coverage of satellite rainfall estimates. Rain gauge observations are merged with a locally calibrated version of the TAMSAT satellite rainfall estimates to produce over 30 years (1983 to date) of rainfall estimates over Ethiopia at a spatial resolution of 10 km and a ten-daily time scale. This involves quality control of rain gauge data, generating a locally calibrated version of the TAMSAT rainfall estimates, and combining these with rain gauge observations from the national station network. The infrared-only satellite rainfall estimates produced using the relatively simple TAMSAT algorithm performed as well as or even better than other satellite rainfall products that use passive microwave inputs and more sophisticated algorithms. There is no substantial difference between the gridded-gauge and combined gauge-satellite products over the test area in Ethiopia, which has a dense station network; however, the combined product exhibits better quality over parts of the country where stations are sparsely distributed.
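A generic sketch of how gauge observations can be blended with a satellite rainfall grid is given below; it uses simple inverse-distance interpolation of gauge-minus-satellite residuals and is not the calibration and merging procedure actually used for the TAMSAT product. All names are ours.

```python
import numpy as np

def merge_gauge_satellite(sat_grid, grid_lats, grid_lons,
                          gauge_lats, gauge_lons, gauge_values, power=2.0):
    """Blend a satellite rainfall grid with rain gauge observations by
    interpolating the gauge-minus-satellite residuals back onto the grid."""
    gauge_lats = np.asarray(gauge_lats, dtype=float)
    gauge_lons = np.asarray(gauge_lons, dtype=float)
    gauge_values = np.asarray(gauge_values, dtype=float)

    # Residual (gauge minus satellite) at each gauge, using the nearest grid cell.
    residuals = []
    for lat, lon, obs in zip(gauge_lats, gauge_lons, gauge_values):
        i = int(np.abs(grid_lats - lat).argmin())
        j = int(np.abs(grid_lons - lon).argmin())
        residuals.append(obs - sat_grid[i, j])
    residuals = np.asarray(residuals)

    # Interpolate residuals to every grid cell and correct the satellite field.
    merged = np.empty_like(sat_grid, dtype=float)
    for i, lat in enumerate(grid_lats):
        for j, lon in enumerate(grid_lons):
            d = np.hypot(gauge_lats - lat, gauge_lons - lon)
            if np.any(d < 1e-9):                      # a gauge sits on this cell
                merged[i, j] = gauge_values[int(np.argmin(d))]
                continue
            w = 1.0 / d ** power
            merged[i, j] = sat_grid[i, j] + np.sum(w * residuals) / np.sum(w)
    return np.clip(merged, 0.0, None)                 # rainfall cannot be negative
```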
Abstract:
In this article, we review the state-of-the-art techniques in mining data streams for mobile and ubiquitous environments. We start the review with a concise background of data stream processing, presenting the building blocks for mining data streams. In a wide range of applications, data streams are required to be processed on small ubiquitous devices like smartphones and sensor devices. Mobile and ubiquitous data mining targets these applications with tailored techniques and approaches addressing scarcity of resources and mobility issues. Two categories can be identified for mobile and ubiquitous mining of streaming data: single-node and distributed. This survey covers both categories. Mining mobile and ubiquitous data requires algorithms with the ability to monitor and adapt the working conditions to the available computational resources. We identify the key characteristics of these algorithms and present illustrative applications. Distributed data stream mining in the mobile environment is then discussed, presenting the Pocket Data Mining framework. Mobility of users stimulates the adoption of context-awareness in this area of research. Context-awareness and collaboration are discussed in the context of Collaborative Data Stream Mining, where agents share knowledge to learn adaptive, accurate models.
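As a toy illustration of the resource-adaptive behaviour described here, the sketch below shrinks or grows a sliding-window summary according to how much memory a device reports as free; it is our own example and not part of the Pocket Data Mining framework.

```python
from collections import deque

class AdaptiveStreamMean:
    """Maintain a sliding-window summary of a numeric stream, resizing the
    window when the device reports a change in available memory."""

    def __init__(self, max_window=1000, min_window=50):
        self.max_window = max_window
        self.min_window = min_window
        self.window = deque(maxlen=max_window)

    def adapt(self, memory_free_fraction):
        """Resize the window in proportion to the currently free memory."""
        target = int(self.min_window +
                     memory_free_fraction * (self.max_window - self.min_window))
        target = max(target, self.min_window)
        if target != self.window.maxlen:
            self.window = deque(self.window, maxlen=target)

    def update(self, value):
        self.window.append(value)

    def mean(self):
        return sum(self.window) / len(self.window) if self.window else float("nan")
```

For instance, calling adapt(0.2) after a low-memory notification shrinks the window towards its minimum size while the summary statistic remains available to downstream mining tasks.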
Abstract:
Polymers with the ability to heal themselves could provide access to materials with extended lifetimes in a wide range of applications such as surface coatings, automotive components and aerospace composites. Here we describe the synthesis and characterisation of two novel, stimuli-responsive, supramolecular polymer blends based on π-electron-rich pyrenyl residues and π-electron-deficient, chain-folding aromatic diimides that interact through complementary π–π stacking interactions. Different degrees of supramolecular “cross-linking” were achieved by use of divalent or trivalent poly(ethylene glycol)-based polymers featuring pyrenyl end-groups, blended with a known diimide–ether copolymer. The mechanical properties of the resulting polymer blends revealed that higher degrees of supramolecular “cross-link density” yield materials with enhanced mechanical properties, such as increased tensile modulus, modulus of toughness, elasticity and yield point. After a number of break/heal cycles, these materials were found to retain the characteristics of the pristine polymer blend, and this new approach thus offers a simple route to mechanically robust yet healable materials.
Abstract:
Smart healthcare is a complex domain for systems integration because of the human and technical factors and heterogeneous data sources involved. As part of a smart city, it is an area in which clinical functions require smart multi-system collaboration for effective communication among departments, and radiology is one of the areas that relies most heavily on intelligent information integration and communication. It therefore faces many challenges regarding integration and interoperability, such as information collision, heterogeneous data sources, policy obstacles, and procedure mismanagement. The purpose of this study is to conduct an analysis of data, semantic, and pragmatic interoperability of systems integration in a radiology department, and to develop a pragmatic interoperability framework for guiding the integration. We selected an ongoing project at a local hospital for our case study. The project aims to achieve data sharing and interoperability among Radiology Information Systems (RIS), Electronic Patient Record (EPR), and Picture Archiving and Communication Systems (PACS). Qualitative data collection and analysis methods are used. The data sources consisted of documentation, including publications and internal working papers, one year of non-participant observations, and 37 interviews with radiologists, clinicians, directors of IT services, referring clinicians, radiographers, receptionists and secretaries. We identified four primary phases of the data analysis process for the case study: requirements and barriers identification, integration approach, interoperability measurements, and knowledge foundations. Each phase is discussed and supported by qualitative data. Through the analysis we also develop a pragmatic interoperability framework that summarizes the empirical findings and proposes recommendations for guiding integration in the radiology context.