868 results for end user modes of operation
Abstract:
The last 30 years have seen Fuzzy Logic (FL) emerge as a method that either complements or challenges stochastic methods, the traditional approach to modelling uncertainty. But the circumstances under which FL or stochastic methods should be used remain a matter of disagreement, because the areas of application of statistical and FL methods overlap, with differing opinions as to when each method should be used. Practically relevant case studies comparing the two methods are lacking. This work compares stochastic and FL methods for the assessment of spare capacity, using the example of pharmaceutical high purity water (HPW) utility systems. The goal of this study was to find the most appropriate method for modelling uncertainty in industrial-scale HPW systems. The results provide evidence suggesting that stochastic methods are superior to FL methods for simulating uncertainty in chemical plant utilities, including HPW systems, in typical cases where extreme events, for example peaks in demand, or day-to-day variation rather than average values are of interest. The average production output or other statistical measures may, for instance, be of interest in the assessment of workshops. Furthermore, the results indicate that the stochastic model should be used only if found necessary by a deterministic simulation. Consequently, this thesis concludes that either deterministic or stochastic methods should be used to simulate uncertainty in chemical plant utility systems and, by extension, some process systems, because extreme events or the modelling of day-to-day variation are important in capacity extension projects. Other reasons supporting the suggestion that stochastic HPW models are preferred to FL HPW models include: 1. The computer code for stochastic models is typically less complex than that of FL models, thus reducing code maintenance and validation issues. 2. In many respects FL models are similar to deterministic models.
Thus the need for an FL model over a deterministic model is questionable in the case of industrial-scale HPW systems as presented here (as well as other similar systems), since the latter requires simpler models. 3. An FL model may be difficult to "sell" to an end-user, as its results represent "approximate reasoning", a definition of which is, however, lacking. 4. Stochastic models may be applied, with some relatively minor modifications, to other systems, whereas FL models may not. For instance, the stochastic HPW model could be used to model municipal drinking water systems, whereas the FL HPW model could not reasonably be used on such systems. This is because the FL and stochastic model philosophies of an HPW system are fundamentally different. The stochastic model treats schedule and volume uncertainties as random phenomena described by statistical distributions based on either estimated or historical data. The FL model, on the other hand, simulates schedule uncertainties based on estimated operator behaviour, e.g. tiredness of the operators and their working schedule. But in a municipal drinking water distribution system the notion of "operator" breaks down. 5. Stochastic methods can account for uncertainties that are difficult to model with FL. The FL HPW system model does not account for dispensed volume uncertainty, as there appears to be no reasonable method to account for it with FL, whereas the stochastic model includes volume uncertainty.
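The stochastic approach the thesis favours can be illustrated with a minimal Monte Carlo sketch. All specifics below (the event count, the volume distribution, the capacity figure, and the function name) are invented for illustration and do not come from the thesis model:

```python
import random

def simulate_hpw_demand(n_days, events_per_day, vol_mean, vol_sd, capacity):
    """Monte Carlo sketch: daily HPW demand as a sum of dispensed volumes,
    each carrying random (normally distributed) volume uncertainty."""
    peak, shortfall_days = 0.0, 0
    for _ in range(n_days):
        daily = sum(max(0.0, random.gauss(vol_mean, vol_sd))
                    for _ in range(events_per_day))
        peak = max(peak, daily)  # extreme events, not averages, drive capacity
        if daily > capacity:
            shortfall_days += 1
    return peak, shortfall_days / n_days

random.seed(42)
peak_demand, p_shortfall = simulate_hpw_demand(
    n_days=10_000, events_per_day=20,
    vol_mean=50.0, vol_sd=10.0, capacity=1_200.0)
```

Because the assessment hinges on peaks in demand rather than mean output, the sketch tracks the worst observed day alongside the frequency of days on which demand exceeds the installed capacity.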
Abstract:
The research work in this thesis reports rapid separation of biologically important low molecular weight compounds by microchip electrophoresis and ultrahigh performance liquid chromatography. Chapter 1 introduces the theory and principles behind capillary electrophoresis (CE) separation. An overview of the history, the different modes, and the detection techniques coupled to CE is provided. The advantages of microchip electrophoresis are highlighted. Some aspects of metal complex analysis by capillary electrophoresis are described. Finally, the theory and different modes of liquid chromatography are presented. Chapter 2 outlines the development of a method for the capillary electrophoresis of (R,S)-naproxen. Variable parameters of the separation were optimized (i.e. buffer concentration and pH, concentration of chiral selector additives, applied voltage, and injection conditions). The method was validated in terms of linearity, precision, and LOD. The optimized method was then transferred to a microchip electrophoresis system. Two different types of injection, i.e. gated and pinched, were investigated. This microchip method represents the fastest reported chiral separation of naproxen to date. Chapter 3 reports ultra-fast separation of aromatic amino acids by capillary electrophoresis using the short-end technique. Variable parameters of the separation were optimized and validated. The optimized method was then transferred to a microchip electrophoresis system, where the separation time was further reduced. Chapter 4 outlines the use of microchip electrophoresis as an efficient tool for the analysis of aluminium complexes. A 2.5 cm channel with linear imaging UV detection was used to separate and detect the aluminium-dopamine complex and free dopamine. For the first time, a baseline separation of aluminium-dopamine was achieved on a 15 second timescale.
Chapter 5 investigates a rapid, ultra-sensitive and highly efficient method for quantification of histamine in human psoriatic plaques using microdialysis and ultrahigh performance liquid chromatography with fluorescence detection. The method utilized a sub-two-micron packed C18 stationary phase. A fluorescent reagent, 4-(1-pyrene) butyric acid N-hydroxysuccinimide ester, was conjugated to the primary and secondary amino moieties of histamine. The dipyrene-labeled histamine in human urine was also investigated by ultrahigh pressure liquid chromatography using a C18 column with 1.8 μm particle diameter. These methods represent some of the fastest reported separations of histamine using fluorescence detection to date.
Abstract:
We addressed four research questions, each relating to the training and assessment of the competencies associated with the performance of ultrasound-guided axillary brachial plexus blockade (USgABPB). These were: (i) What are the most important determinants of learning of USgABPB? (ii) What is USgABPB, and what are the errors most likely to occur when trainees learn to perform this procedure? (iii) How should end-user input be applied to the development of a novel USgABPB simulator? (iv) Does structured simulation-based training positively influence novice learning of the procedure? We demonstrated that the most important determinants of learning USgABPB are: (a) access to a formal structured training programme; (b) frequent exposure to clinical learning opportunities in an appropriate setting; (c) the presence of an appropriate patient, trainee, and teacher at the same time, in an appropriate environment, which a clinical learning opportunity requires. We carried out a comprehensive description of the procedure. We performed a formal task analysis of USgABPB, identifying (i) 256 specific tasks associated with the safe and effective performance of the procedure, and (ii) the 20 most critical errors likely to occur in this setting. We described a methodology for applying end-user input to simulator development and collected data based on detailed, sequential evaluation of prototypes by trainees in anaesthesia. We carried out a pilot randomised controlled trial assessing the effectiveness of a USgABPB simulator during its development. Our data did not enable us to draw a reliable conclusion to this question; the trial did provide important new learning (as a pilot) to inform future investigation of this question. We believe that the ultimate goal of designing effective simulation-based training and assessment of ultrasound-guided regional anaesthesia is closer to realisation as a result of this work.
It remains to be proven whether this approach will have a positive impact on procedural performance and, more importantly, improve patient outcomes.
Abstract:
Future high speed communications networks will transmit data predominantly over optical fibres. As consumer and enterprise computing will remain the domain of electronics, the electro-optical conversion will be pushed further downstream towards the end user. Consequently, efficient tools are needed for this conversion and, due to many potential advantages, including low cost and high output powers, long wavelength Vertical Cavity Surface Emitting Lasers (VCSELs) are a viable option. Drawbacks, such as broader linewidths than competing options, can be mitigated through additional techniques such as Optical Injection Locking (OIL), which can require significant expertise and expensive equipment. This thesis addresses these issues by removing some of the experimental barriers to achieving performance increases via remote OIL. Firstly, numerical simulations of the phase and the photon and carrier numbers of an OIL semiconductor laser allowed the classification of the stable locking phase limits into three distinct groups. The frequency detuning of constant phase values (φ) was considered, in particular φ = 0, where the modulation response parameters were shown to be independent of the linewidth enhancement factor, α. A new method to estimate α and the coupling rate in a single experiment was formulated. Secondly, a novel technique to remotely determine the locked state of a VCSEL, based on voltage variations of 2 mV to 30 mV during detuned injection, has been developed which can identify oscillatory and locked states. 2D and 3D maps of voltage, optical, and electrical spectra illustrate the corresponding behaviours. Finally, the use of directly modulated VCSELs as light sources for passive optical networks was investigated by successful transmission of data at 10 Gbit/s over 40 km of single mode fibre (SMF), using cost-effective electronic dispersion compensation to mitigate errors due to wavelength chirp.
A widely tuneable MEMS-VCSEL was established as a good candidate for an externally modulated colourless source after a record error-free transmission at 10 Gbit/s over 50 km of SMF across a 30 nm single mode tuning range. The ability to remotely set the emission wavelength using the novel methods developed in this thesis was demonstrated.
Abstract:
The mobile cloud computing paradigm can offer relevant and useful services to the users of smart mobile devices. Such public services already exist on the web and in cloud deployments, by implementing common web service standards. However, these services are described by mark-up languages, such as XML, that cannot be comprehended by non-specialists. Furthermore, the lack of common interfaces for related services makes discovery and consumption difficult for both users and software. The problem of service description, discovery, and consumption for the mobile cloud must be addressed to allow users to benefit from these services on mobile devices. This paper introduces our work on a mobile cloud service discovery solution, which is utilised by our mobile cloud middleware, Context Aware Mobile Cloud Services (CAMCS). The aim of our approach is to remove complex mark-up languages from the description and discovery process. By means of the Cloud Personal Assistant (CPA) assigned to each user of CAMCS, relevant mobile cloud services can be discovered and consumed easily by the end user from the mobile device. We present the discovery process, the architecture of our own service registry, and service description structure. CAMCS allows services to be used from the mobile device through a user's CPA, by means of user defined tasks. We present the task model of the CPA enabled by our solution, including automatic tasks, which can perform work for the user without an explicit request.
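A mark-up-free registry lookup of the kind described could look something like the following sketch. The record structure, the example services, and the `discover` function are illustrative assumptions, not the actual CAMCS or CPA interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceRecord:
    """Hypothetical mark-up-free service description (not the CAMCS schema)."""
    name: str
    endpoint: str
    keywords: list = field(default_factory=list)

# A toy registry standing in for the paper's service registry.
REGISTRY = [
    ServiceRecord("weather", "https://example.org/weather", ["forecast", "weather"]),
    ServiceRecord("transit", "https://example.org/transit", ["bus", "timetable"]),
]

def discover(query):
    """Match services against the plain-text terms of a user's request,
    with no XML or other mark-up involved in description or discovery."""
    terms = query.lower().split()
    return [s for s in REGISTRY if any(t in s.keywords for t in terms)]

matches = discover("weather forecast for tomorrow")
```

The design point this illustrates is that a user (or the CPA acting on their behalf) expresses a task in plain language and the registry resolves it to concrete endpoints, rather than requiring the user to interpret XML service descriptions.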
Abstract:
This paper describes a methodology for detecting anomalies from sequentially observed and potentially noisy data. The proposed approach consists of two main elements: 1) filtering, or assigning a belief or likelihood to each successive measurement based upon our ability to predict it from previous noisy observations and 2) hedging, or flagging potential anomalies by comparing the current belief against a time-varying and data-adaptive threshold. The threshold is adjusted based on the available feedback from an end user. Our algorithms, which combine universal prediction with recent work on online convex programming, do not require computing posterior distributions given all current observations and involve simple primal-dual parameter updates. At the heart of the proposed approach lie exponential-family models which can be used in a wide variety of contexts and applications, and which yield methods that achieve sublinear per-round regret against both static and slowly varying product distributions with marginals drawn from the same exponential family. Moreover, the regret against static distributions coincides with the minimax value of the corresponding online strongly convex game. We also prove bounds on the number of mistakes made during the hedging step relative to the best offline choice of the threshold with access to all estimated beliefs and feedback signals. We validate the theory on synthetic data drawn from a time-varying distribution over binary vectors of high dimensionality, as well as on the Enron email dataset.
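The hedging step described above can be sketched in a few lines. The simple gradient-style threshold update and its parameters below are assumptions for illustration, not the paper's exact primal-dual algorithm:

```python
def hedge(beliefs, feedback, eta=0.1, tau0=0.5):
    """Flag an anomaly when the current belief falls below a threshold
    that is adapted online from end-user feedback.

    beliefs: likelihoods in [0, 1] produced by the filtering step.
    feedback[t]: +1 if the flag at time t was a false alarm,
                 -1 if an anomaly was missed, 0 if no feedback.
    """
    tau = tau0
    flags = []
    for t, b in enumerate(beliefs):
        flags.append(b < tau)          # low belief => poorly predicted => anomaly
        tau -= eta * feedback[t]       # false alarm lowers tau, a miss raises it
        tau = min(max(tau, 0.0), 1.0)  # keep the threshold in [0, 1]
    return flags

flags = hedge([0.9, 0.2, 0.8, 0.1], [0, -1, 0, 0])
```

The point of the sketch is the feedback loop: the threshold is not fixed in advance but moves against the direction of the most recent mistake, which is what the paper's mistake bounds are about.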
Abstract:
BACKGROUND: Scientists rarely reuse expert knowledge of phylogeny, in spite of years of effort to assemble a great "Tree of Life" (ToL). A notable exception involves the use of Phylomatic, which provides tools to generate custom phylogenies from a large, pre-computed, expert phylogeny of plant taxa. This suggests great potential for a more generalized system that, starting with a query consisting of a list of any known species, would rectify non-standard names, identify expert phylogenies containing the implicated taxa, prune away unneeded parts, and supply branch lengths and annotations, resulting in a custom phylogeny suited to the user's needs. Such a system could become a sustainable community resource if implemented as a distributed system of loosely coupled parts that interact through clearly defined interfaces. RESULTS: With the aim of building such a "phylotastic" system, the NESCent Hackathons, Interoperability, Phylogenies (HIP) working group recruited 2 dozen scientist-programmers to a weeklong programming hackathon in June 2012. During the hackathon (and a three-month follow-up period), 5 teams produced designs, implementations, documentation, presentations, and tests including: (1) a generalized scheme for integrating components; (2) proof-of-concept pruners and controllers; (3) a meta-API for taxonomic name resolution services; (4) a system for storing, finding, and retrieving phylogenies using semantic web technologies for data exchange, storage, and querying; (5) an innovative new service, DateLife.org, which synthesizes pre-computed, time-calibrated phylogenies to assign ages to nodes; and (6) demonstration projects. These outcomes are accessible via a public code repository (GitHub.com), a website (http://www.phylotastic.org), and a server image. 
CONCLUSIONS: Approximately 9 person-months of effort (centered on a software development hackathon) resulted in the design and implementation of proof-of-concept software for 4 core phylotastic components, 3 controllers, and 3 end-user demonstration tools. While these products have substantial limitations, they suggest considerable potential for a distributed system that makes phylogenetic knowledge readily accessible in computable form. Widespread use of phylotastic systems will create an electronic marketplace for sharing phylogenetic knowledge that will spur innovation in other areas of the ToL enterprise, such as annotation of sources and methods and third-party methods of quality assessment.
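The pruning step at the heart of such a pipeline can be sketched briefly. The nested-tuple tree encoding and the taxon names below are illustrative assumptions, not the formats used by the hackathon tools:

```python
def prune(tree, keep):
    """Reduce a large expert tree to just the taxa in a user's query list.

    tree: a leaf name (str) or a tuple of subtrees.
    keep: set of taxon names to retain.
    Returns the pruned tree, or None if no queried taxa survive.
    """
    if isinstance(tree, str):
        return tree if tree in keep else None
    kept = [p for p in (prune(child, keep) for child in tree) if p is not None]
    if not kept:
        return None
    # Collapse internal nodes left with a single child.
    return kept[0] if len(kept) == 1 else tuple(kept)

# Toy expert tree: ((Homo, Pan), Gorilla) on one side, (Mus, Rattus) on the other.
expert_tree = ((("Homo", "Pan"), "Gorilla"), ("Mus", "Rattus"))
custom = prune(expert_tree, {"Homo", "Gorilla", "Mus"})
```

A full phylotastic pipeline would precede this step with taxonomic name resolution and follow it with branch-length and annotation transfer, but the core "prune away unneeded parts" operation is this small.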
Abstract:
BACKGROUND: In recent years large bibliographic databases have made much of the published literature of biology available for searches. However, the capabilities of the search engines integrated into these databases for text-based bibliographic searches are limited. To enable searches that deliver the results expected by comparative anatomists, an underlying logical structure known as an ontology is required. DEVELOPMENT AND TESTING OF THE ONTOLOGY: Here we present the Mammalian Feeding Muscle Ontology (MFMO), a multi-species ontology focused on anatomical structures that participate in feeding and other oral/pharyngeal behaviors. A unique feature of the MFMO is that a simple, computable definition of each muscle, which includes its attachments and innervation, holds true across mammals. This construction mirrors the logical foundation of comparative anatomy and permits searches using language familiar to biologists. Further, it provides a template for muscles that will be useful in extending any anatomy ontology. The MFMO was developed to support the Feeding Experiments End-User Database Project (FEED, https://feedexp.org/), a publicly-available, online repository for physiological data collected from in vivo studies of feeding (e.g., mastication, biting, swallowing) in mammals. Currently the MFMO is integrated into FEED and also into two literature-specific implementations of Textpresso, a text-mining system that facilitates powerful searches of a corpus of scientific publications. We evaluate the MFMO by asking questions that test the ability of the ontology to return appropriate answers (competency questions). We compare the results of queries of the MFMO to results from similar searches in PubMed and Google Scholar. RESULTS AND SIGNIFICANCE: Our tests demonstrate that the MFMO is competent to answer queries formed in the common language of comparative anatomy, whereas PubMed and Google Scholar are not.
Overall, our results show that by incorporating anatomical ontologies into searches, an expanded and anatomically comprehensive set of results can be obtained. The broader scientific and publishing communities should consider taking up the challenge of semantically enabled search capabilities.
Abstract:
User-supplied knowledge and interaction is a vital component of a toolkit for producing high quality parallel implementations of scalar FORTRAN numerical code. In this paper we consider the necessary components that such a parallelisation toolkit should possess to provide an effective environment to identify, extract, and embed relevant user knowledge. We also examine to what extent these facilities are available in leading parallelisation tools; in particular, we discuss how these issues have been addressed in the development of the user interface of the Computer Aided Parallelisation Tools (CAPTools). The CAPTools environment has been designed to enable user exploration, interaction, and insertion of user knowledge to facilitate the automatic generation of very efficient parallel code. A key issue in the user's interaction is control of the volume of information, so that the user is focused on only that which is needed. User control over the level and extent of information revealed at any phase is supplied using a wide variety of filters. Another issue is the way in which information is communicated. Dependence analysis and its resulting graphs involve many sophisticated, rather abstract concepts unlikely to be familiar to most users of parallelising tools. As such, considerable effort has been made to communicate with the user in terms that they will understand. These features, amongst others, and their use in the parallelisation process are described and their effectiveness discussed.
Abstract:
In 2000 a Review of Current Marine Observations in relation to present and future needs was undertaken by the Inter-Agency Committee for Marine Science and Technology (IACMST). The Marine Environmental Change Network (MECN) was initiated in 2002 as a direct response to the recommendations of the report. A key part of the current phase of the MECN is to ensure that information from the network is provided to policy makers and other end-users to enable them to produce more accurate assessments of ecosystem state and gain a clearer understanding of the factors influencing change in marine ecosystems. The MECN holds workshops on an annual basis, bringing together partners maintaining time-series and long-term datasets as well as end-users interested in outputs from the network. It was decided that the first workshop of the MECN continuation phase should consist of an evaluation of the time series and datasets maintained by partners in the MECN with regard to their ‘fitness for purpose’ for answering key science questions and informing policy development. This report is based on the outcomes of the workshop. Section one of the report contains a brief introduction to monitoring, time series, and long-term datasets. The various terms are defined, and the need for MECN-type data to complement compliance monitoring programmes is discussed. Outlines are also given of initiatives such as the United Kingdom Marine Monitoring and Assessment Strategy (UKMMAS) and Oceans 2025. Section two contains detailed information for each of the MECN time series / long-term datasets, including information on scientific outputs and current objectives. This information is mainly based on the presentations given at the workshop and therefore follows a format whereby the following headings are addressed: origin of the time series, including original objectives; current objectives; policy relevance; products (advice, publications, science and society).
Section three consists of comments made by the review panel concerning all the time series and the network. Needs or issues highlighted by the panel with regard to the future of long-term datasets and time-series in the UK are shown, along with advice and potential solutions where offered. The recommendations are divided into four categories: ‘The MECN and end-user requirements’; ‘Procedures & protocols’; ‘Securing data series’; and ‘Future developments’. Ever since marine environmental protection issues came to the fore in the 1960s, it has been recognised that a suitable evidence base on environmental change is required in order to support policy and management for UK waters. Section four gives a brief summary of the development of marine policy in the UK, along with comments on the availability and necessity of long-term marine observations for the implementation of this policy. Policy relating to three main areas is discussed: marine conservation (protecting biodiversity and marine ecosystems); marine pollution; and fisheries. The conclusion of this section is that there has always been a specific requirement for information on long-term change in marine ecosystems around the UK in order to address concerns over pollution, fishing, and general conservation. It is now imperative that this need is addressed in order for the UK to be able to fulfil its policy commitments and manage marine ecosystems in the light of climate change and other factors.
Abstract:
Received for publication October 31, 2002. Design and operation of Fe0 permeable reactive barriers (PRBs) can be improved by understanding the long-term mineralogical transformations that occur within PRBs. Changes in mineral precipitates, cementation, and corrosion of Fe0 filings within an in situ pilot-scale PRB were examined after the first 30 months of operation and compared with results of a previous study of the PRB conducted 15 months earlier using X-ray diffraction and scanning electron microscopy employing energy dispersive X-ray and backscatter electron analyses. Iron (oxy)hydroxides, aragonite, and maghemite and/or magnetite occurred throughout the cores collected 30 mo after installation. Goethite, lepidocrocite, mackinawite, aragonite, calcite, and siderite were associated with oxidized and cemented areas, while green rusts were detected in more reduced zones. Basic differences from our last detailed investigation include (i) mackinawite crystallized from amorphous FeS, (ii) aragonite transformed into calcite, (iii) akaganeite transformed to goethite and lepidocrocite, (iv) iron (oxy)hydroxides and calcium and iron carbonate minerals increased, (v) cementation was greater in the more recent study, and (vi) oxidation, corrosion, and disintegration of Fe0 filings were greater, especially in cemented areas, in the more recent study. If the degree of corrosion and cementation that was observed from 15 to 30 mo after installation continues, certain portions of the PRB (i.e., up-gradient entrance of the ground water to the Fe0 section of the PRB) may last less than five more years, thus reducing the effectiveness of the PRB to mitigate contaminants. Abbreviations: EDX, energy dispersive X-ray • Fe0, zerovalent iron • PRB, permeable reactive barrier • SEM, scanning electron microscopy • XRD, X-ray diffraction
Abstract:
The free-base form of tetra-tert-butyl porphine (TtBP), which has extremely bulky meso substituents, is severely distorted from planarity, with a ruffling angle of 65.5 degrees. The resonance Raman spectra of TtBP (lambda(ex) = 457.9 nm) and its d(2), d(8), and d(10) isotopomers have been recorded, and while the spectra show high-frequency bands similar to those observed for planar meso-substituted porphyrins, there are several additional intense bands in the low-frequency region. Density functional calculations at the B3-LYP/6-31G(d) level were carried out for all four isotopomers, and calculated frequencies were scaled using a single factor of 0.98. The single factor scaling approach was validated on free base porphine, where the RMS error was found to be 14.9 cm(-1). All the assigned bands in the high-frequency (> 1000 cm(-1)) region of TtBP were found to be due to vibrations similar in character to the in-plane skeletal modes of conventional planar porphyrins. In the low-frequency region, two of the bands, assigned as nu(8) (ca. 330 cm(-1)) and nu(16) (ca. 540 cm(-1)), are also found in planar porphyrins such as tetra-phenyl porphine (TPP) and tetra-iso-propyl porphine (IPP). Of the remaining three very strong bands, the lowest frequency band was assigned as gamma(12) (pyr swivel, obsd 415 cm(-1), calcd 407 cm(-1) in d(0)). The next band, observed at 589 cm(-1) in the d(0) compound (calcd 583 cm(-1)), was assigned as a mode whose composition is a mixture of modes that were previously labeled gamma(13) (gamma(CmCaHmCa)) and gamma(11) (pyr fold(asym)) in NiOEP. The final strong band, observed at 744 cm(-1) (calcd 746 cm(-1)), was assigned to a mode whose composition is again a mixture of gamma(11) and gamma(13), although here it is gamma(11) rather than gamma(13) which predominates. These bands have characters and positions similar to those of three of the four porphyrin ring-based, weak bands that have previously been observed for NiTPP.
In addition there are several weaker bands in the TtBP spectra that are also
Abstract:
Polymer nanocomposites offer the potential of enhanced properties, such as increased modulus and barrier properties, to the end user. Much work has been carried out on the effects of extrusion conditions on melt processed nanocomposites, but very little research has been conducted on the use of polymer nanocomposites in semi-solid forming processes such as thermoforming and injection blow molding. These processes are used to make much of today’s packaging, and any improvements in performance, such as possible lightweighting due to increased modulus, would bring significant benefits both economically and environmentally. The work described here looks at the biaxial deformation of polypropylene-clay nanocomposites under industrial forming conditions in order to determine if the presence of clay affects the processability, structure, and mechanical properties of the stretched material. Melt compounded polypropylene/clay composites in sheet form were biaxially stretched at a variety of processing conditions to examine the effect of high temperature, high strain, and high strain rate processing on sheet structure
and properties.
A biaxial test rig was used to carry out the testing, which imposed conditions on the sheet that are representative of those applied in injection blow molding and thermoforming. Results show that the presence of clay increases the yield stress relative to the unfilled material at typical processing temperatures, and that the sensitivity of the yield stress to temperature is greater for the filled material. The stretching process is found to have a significant effect on the delamination and alignment of clay particles (as observed by TEM) and on the yield stress and elongation at break of the stretched sheet.
Abstract:
New elements associated with Web 2.0 relating to interactivity and end-user focus have combined with the availability of new levels of information to encourage the development of what may be termed a Gov 2.0 approach. This, in combination with recent initiatives in the modernising government programme, has emphasised new levels of public participation and engagement with government, as well as a re-engineering of public services to make them more responsive to their end users. Adopting a governmentality perspective, it is argued that this involves a wider process of governing through constructing and reconstructing ideas of the public, community, and individual citizen-consumers who take on a role in their own governance. It is argued that this fundamental re-working of the nature of what is public represents a constitutional change that is perhaps more significant than the constitutional reform programme directed to formal government, which attracts more attention.
Abstract:
Mode-mixing of coherent excitations of a trapped Bose-Einstein condensate is modeled using the Bogoliubov approximation. Calculations are presented for second-harmonic generation between the two lowest-lying even-parity m=0 modes in an oblate spheroidal trap. Hybridization of the modes of the breather (l=0) and surface (l=4) states leads to the formation of a Bogoliubov dark state near phase-matching resonance so that a single mode is coherently populated. Efficient harmonic generation requires a strong coupling rate, sharply-defined and well-separated frequency spectrum, and good phase matching. We find that in all three respects the quantal results are significantly different from hydrodynamic predictions. Typically the second-harmonic conversion rate is half that given by an equivalent hydrodynamic estimate.