Abstract:
Precise clock synchronization is essential in emerging time-critical distributed control systems operating over computer networks, where the requirements focus mostly on relative clock synchronization and high synchronization precision. Existing techniques such as the Network Time Protocol (NTP) and the IEEE 1588 standard can be difficult to apply to such systems because they require highly precise hardware clocks, cause network congestion through frequent synchronization message transmissions, and impose high overheads. In response, we present a Time Stamp Counter based precise Relative Clock Synchronization Protocol (TSC-RCSP) for distributed control applications operating over local-area networks (LANs). In our protocol a software clock based on the TSC register, which counts CPU cycles, is adopted in both the time server and its clients. TSC-based clocks offer clients a precise, stable and low-cost clock synchronization solution. Experimental results show that clock precision on the order of 10 microseconds can be achieved in small-scale LAN systems. Such precision is much higher than that of a processor's Time-Of-Day clock, and is easily sufficient for most distributed real-time control applications over LANs.
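The relative offset between a client's and the server's cycle-counter clocks can be estimated with an NTP-style request/response exchange. The sketch below is illustrative only: the counter frequency, the function names, and the symmetric-delay assumption are ours, not details of TSC-RCSP itself.

```python
TSC_FREQ_HZ = 3_000_000_000  # assumed nominal cycle-counter frequency (illustrative)

def ticks_to_us(ticks):
    """Convert raw cycle-counter ticks to microseconds."""
    return ticks * 1_000_000 / TSC_FREQ_HZ

def estimate_offset_us(t1_ticks, server_ticks, t2_ticks):
    """NTP-style offset estimate from one exchange: the client records
    t1 (send) and t2 (receive) on its own counter; the server reports its
    counter reading. Assumes the network delay is symmetric."""
    midpoint = (t1_ticks + t2_ticks) / 2
    return ticks_to_us(server_ticks - midpoint)

# Server counter reads 5,000 ticks ahead of the exchange midpoint
offset_us = estimate_offset_us(1_000_000, 1_020_000, 1_030_000)
```

Averaging such offset estimates over many exchanges, as synchronization protocols commonly do, reduces the impact of delay asymmetry on any single measurement.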
Abstract:
This review focuses on one of the fundamental phenomena that occur upon application of sufficiently strong electric fields to gases, namely the formation and propagation of ionization waves, or streamers. The dynamics of streamers is controlled by strongly nonlinear coupling, in localized streamer tip regions, between the electric field enhanced by charge separation and the ionization and transport of charged species in that enhanced field. Streamers appear in nature (as initial stages of sparks and lightning, and as the huge structures known as sprites above thunderclouds), and are also found in numerous technological applications of electrical discharges. Here we discuss the fundamental physics of guided streamer-like structures, the plasma bullets produced in cold atmospheric-pressure plasma jets. Plasma bullets are guided ionization waves moving in a thin column of a jet of plasma-forming gases (e.g., He or Ar) expanding into ambient air. In contrast to streamers in free (unbounded) space, which propagate in a stochastic manner and often branch, guided ionization waves are repetitive and highly reproducible and propagate along the same path, the jet axis. This property of guided streamers, in comparison with streamers in free space, enables many advanced time-resolved experimental studies of ionization waves with nanosecond precision. In particular, experimental studies on the manipulation of streamers by external electric fields and on streamer interactions are critically examined. This review also introduces the basic theories and recent advances in experimental and computational studies of guided streamers, in particular those related to the propagation dynamics of ionization waves and the various parameters of relevance to plasma streamers. This knowledge is very useful for optimizing the efficacy of plasma streamer discharges in applications ranging from health care and medicine to materials science and nanotechnology.
Abstract:
Plasma plumes with exotically segmented channel structure and plasma bullet propagation are produced in atmospheric plasma jets. This is achieved by tailoring interruptions of a continuous DC power supply over time scales comparable to the lifetimes of residual electrons produced by the preceding discharge phase. These phenomena are explained by studying the plasma dynamics using nanosecond-precision imaging. One of the plumes is produced using 2-10 μs interruptions of the 8 kV DC voltage and features a still-bright channel from which a propagating bullet detaches. A shorter interruption of 900 ns produces a plume with an additional long conducting dark channel between the jet nozzle and the bright area. The bullet size, formation dynamics, and propagation speed and distance can be effectively controlled. This may lead to micrometer- and nanosecond-precision delivery of quantized plasma bits, warranted for next-generation health, materials, and device technologies.
Abstract:
This article introduces a deterministic approach to using low-temperature, thermally non-equilibrium plasmas to synthesize delicate low-dimensional nanostructures of a small number of atoms on plasma-exposed surfaces. This approach is based on a set of plasma-related strategies to control elementary surface processes, an area traditionally covered by surface science. Major issues related to balanced delivery and consumption of building units, appropriate choice of process conditions, and account of plasma-related electric fields, electric charges and polarization effects are identified and discussed in the quantum dot nanoarray context. Examples of a suitable plasma-aided nanofabrication facility and specific effects of a plasma-based environment on self-organized growth of size- and position-uniform nanodot arrays are shown. These results suggest a very positive outlook for using low-temperature plasma-based nanotools in high-precision nanofabrication of self-assembled nanostructures and elements of nanodevices, one of the areas of continuously rising demand from academia and industry.
Abstract:
The results of numerical simulations of nanometer-precision distributions of microscopic ion fluxes in ion-assisted etching of nanoscale features on the surfaces of dielectric materials, using a self-assembled monolayer of spherical nanoparticles as a mask, are presented. It is shown that the ion fluxes to the substrate and nanosphere surfaces can be effectively controlled by the plasma parameters and the external bias applied to the substrate. By proper adjustment of these parameters, the ion flux can be focused onto the areas uncovered by the nanospheres. Under certain conditions, the ion flux distributions feature sophisticated hexagonal patterns, which may lead to very different nanofeature etching profiles. The results presented are generic and suggest viable ways to overcome some of the limitations of the existing plasma-assisted nanolithography.
Abstract:
Since 1995 the eruption of the andesitic Soufrière Hills Volcano (SHV), Montserrat, has been studied in substantial detail. As an important contribution to this effort, the Seismic Experiment with Airgun-source-Caribbean Andesitic Lava Island Precision Seismo-geodetic Observatory (SEA-CALIPSO) experiment was devised to image the arc crust underlying Montserrat and, if possible, the magma system at SHV using tomography and reflection seismology. Field operations were carried out in October-December 2007, with 238 seismometers deployed on land supplementing seven volcano observatory stations, and with an array of 10 ocean-bottom seismometers deployed offshore. The RRS James Cook on NERC cruise JC19 towed a tuned airgun array plus a digital 48-channel streamer on encircling and radial tracks for 77 h around Montserrat during December 2007, firing 4414 airgun shots and yielding about 47 Gb of data. The main objectives of the experiment were achieved. Preliminary analyses of these data, published in 2010, generated images of heterogeneous high-velocity bodies representing the cores of volcanoes and subjacent intrusions, and shallow areas of low velocity on the flanks of the island that reflect volcaniclastic deposits and hydrothermal alteration. The resolution of this preliminary work did not extend beyond 5 km depth. An improved three-dimensional (3D) seismic velocity model was then obtained by inversion of 181,665 first-arrival travel times from a more complete sampling of the dataset, yielding clear images to 7.5 km depth of a low-velocity volume, interpreted as the magma chamber that feeds the current eruption, with an estimated volume of 13 km³. Coupled thermal and seismic modelling revealed properties of the partly crystallized magma. Seismic reflection analyses aimed at imaging structures under southern Montserrat had limited success, but suggest subhorizontal layering, interpreted as sills, at depths between 6 and 19 km.
Seismic reflection profiles collected offshore reveal deep fans of volcaniclastic debris and fault offsets, leading to new tectonic interpretations. This chapter presents the project goals and planning concepts, describes in detail the campaigns at sea and on land, summarizes the major results, and identifies the key lessons learned.
Abstract:
Database watermarking has received significant research attention in the current decade. However, almost all watermarking models to date have been either irreversible (the original relation cannot be restored from the watermarked relation) and/or non-blind (requiring the original relation to detect the watermark in the watermarked relation). Such models have several disadvantages relative to reversible and blind watermarking (which requires only the watermarked relation and a secret key, from which the watermark is detected and the original relation restored): the inability to identify the rightful owner in case of successful secondary watermarking, the inability to revert the relation to the original data set (required in high-precision industries), and the need to store the unmarked relation in secure secondary storage. To overcome these problems, we propose a watermarking scheme that is both reversible and blind. We utilize difference expansion on integers to achieve reversibility. The major advantages of our scheme are reversibility to a high-quality original data set, rightful owner identification, resistance against secondary watermarking attacks, and no need to store the original database in secure secondary storage.
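Difference expansion (in the style of Tian's reversible scheme) hides one bit in a pair of integers so that both the bit and the original values can be recovered exactly. The pair-based sketch below is illustrative of the general technique, not necessarily the exact construction of the proposed scheme.

```python
def de_embed(x, y, bit):
    """Embed one bit into an integer pair via difference expansion."""
    l = (x + y) // 2        # integer average (preserved by embedding)
    h = x - y               # difference between the pair
    h2 = 2 * h + bit        # expand the difference and append the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(xw, yw):
    """Recover the bit and restore the original pair (blind, reversible)."""
    l = (xw + yw) // 2      # the average survives embedding unchanged
    h2 = xw - yw
    bit = h2 & 1            # the hidden bit is the LSB of the difference
    h = h2 >> 1             # arithmetic shift undoes the expansion
    return l + (h + 1) // 2, l - h // 2, bit

xw, yw = de_embed(7, 5, 1)        # watermarked pair: (9, 4)
x, y, bit = de_extract(xw, yw)    # restores (7, 5) and recovers bit 1
```

In practice the expanded difference must also be checked against the attribute's allowed range to prevent overflow, and a secret key selects which tuples and attributes carry watermark bits.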
Abstract:
Monitoring gases for environmental, industrial and agricultural applications is a demanding task that requires long periods of observation, large numbers of sensors, data management, high temporal and spatial resolution, long-term stability, recalibration procedures, computational resources, and energy availability. Wireless Sensor Networks (WSNs) and Unmanned Aerial Vehicles (UAVs) currently represent the best alternative for monitoring large, remote, and difficult-to-access areas, as these technologies can carry specialised gas sensing systems and offer geo-located, time-stamped samples. However, these technologies are not yet fully functional for scientific and commercial applications, as their development and availability are limited by several factors: the cost of the sensors required to cover large areas, their stability over long periods, their power consumption, and the weight of the system to be used on small UAVs. Energy availability is a serious challenge when WSNs are deployed in remote areas with difficult access to the grid, while small UAVs are limited by the energy in their reservoir tank or batteries. Another important challenge is the management of the data produced by the sensor nodes, which requires a large amount of resources to be stored, analysed and displayed after long periods of operation. In response to these challenges, this research proposes the following solutions aiming to improve the availability and development of these technologies for gas sensing monitoring: first, the integration of WSNs and UAVs for environmental gas sensing in order to monitor large volumes at ground and aerial levels with a minimum of sensor nodes for effective 3D monitoring; second, the use of solar energy as the main power source to allow continuous monitoring; and lastly, the creation of a data management platform to store, analyse and share the information with operators and external users.
The principal outcome of this research is the creation of a gas sensing system suitable for monitoring any kind of gas, which has been installed and tested for CH4 and CO2 in a wireless sensor network (WSN) and on a UAV. Using the same gas sensing system in a WSN and a UAV significantly reduces the complexity and cost of the application as it allows: a) the standardisation of signal acquisition and data processing, thereby reducing the required computational resources; b) the standardisation of calibration and operational procedures, reducing systematic errors and complexity; c) the reduction of weight and energy consumption, leading to improved power management and weight balance in the case of UAVs; d) the simplification of the sensor node architecture, which is easily replicated in all the nodes. I evaluated two different sensor modules by laboratory, bench, and field tests: a non-dispersive infrared (NDIR) module and a metal-oxide resistive nano-sensor (MOX nano-sensor) module. The tests revealed advantages and disadvantages of the two modules when used for static nodes at ground level and mobile nodes on board a UAV. Commercial NDIR modules for CO2 were successfully tested and evaluated in the WSN and on board the UAV. Their advantages are precision and stability, but their application is limited to a few gases. The advantages of the MOX nano-sensors are their small size, low weight, low power consumption and sensitivity to a broad range of gases. However, selectivity is still a concern that needs to be addressed in further studies. An electronic board to interface sensors across a large range of resistivity was successfully designed, built and adapted to operate on ground nodes and on board the UAV. The WSN and UAV were powered with solar energy in order to facilitate outdoor deployment, data collection and continuous monitoring over large and remote volumes.
The gas sensing, solar power, transmission and data management systems of the WSN and UAV were fully evaluated by laboratory, bench and field testing. The methodology created to design, develop, integrate and test these systems is described in detail and was experimentally validated. The sampling and transmission capabilities of the WSN and UAV were successfully tested in an emulated mission involving the detection and measurement of CO2 concentrations emanating from a contaminant source in a field; the data collected during the mission were transmitted in real time to a central node for data analysis and 3D mapping of the target gas. The major outcome of this research is the accomplishment of the first flight mission, not previously reported in the literature, of a solar-powered UAV equipped with a CO2 sensing system operating in conjunction with a network of ground sensor nodes for effective 3D monitoring of the target gas. A data management platform was created using an external internet server, which manages, stores, and shares the collected data on two web pages, showing statistics and static graph images for internal and external users on request. The system was bench-tested with real data produced by the sensor nodes, and the architecture of the platform is described and illustrated in detail in order to provide guidance on how to replicate the system. In conclusion, the overall results of the project provide guidance on how to create a gas sensing system integrating WSNs and UAVs, how to power the system with solar energy, and how to manage the data produced by the sensor nodes. This system can be used in a wide range of outdoor applications, especially in agriculture, bushfire monitoring, mining studies, zoology, and botanical studies, opening the way to ubiquitous low-cost environmental monitoring, which may help to decrease our carbon footprint and to improve the health of the planet.
Abstract:
Background: Malaria rapid diagnostic tests (RDTs) are increasingly used by remote health personnel with minimal training in laboratory techniques. RDTs must, therefore, be as simple, safe and reliable as possible. Transfer of blood from the patient to the RDT is critical to safety and accuracy, and poses a significant challenge to many users. Blood transfer devices were evaluated for accuracy and precision of volume transferred, safety and ease of use, to identify the most appropriate devices for use with RDTs in routine clinical care. Methods: Five devices, a loop, straw-pipette, calibrated pipette, glass capillary tube, and a new inverted cup device, were evaluated in Nigeria, the Philippines and Uganda. The 227 participating health workers used each device to transfer blood from a simulated finger-prick site to filter paper. For each transfer, the number of attempts required to collect and deposit blood and any spilling of blood during transfer were recorded. Perceptions of ease of use and safety of each device were recorded for each participant. Blood volume transferred was calculated from the area of blood spots deposited on filter paper. Results: The overall mean volumes transferred by devices differed significantly from the target volume of 5 microliters (p < 0.001). The inverted cup (4.6 microliters) most closely approximated the target volume. The glass capillary was excluded from volume analysis as the estimation method used is not compatible with this device. The calibrated pipette accounted for the largest proportion of blood exposures (23/225, 10%); exposures ranged from 2% to 6% for the other four devices. The inverted cup was considered easiest to use in blood collection (206/226, 91%); the straw-pipette and calibrated pipette were rated lowest (143/225 [64%] and 135/225 [60%], respectively). Overall, the inverted cup was the most preferred device (163/227, 72%), followed by the loop (138/227, 61%).
Conclusions: The performance of blood transfer devices varied in this evaluation of accuracy, blood safety, ease of use, and user preference. The inverted cup design achieved the highest overall performance, while the loop also performed well. These findings have relevance for any point-of-care diagnostics that require blood sampling.
Abstract:
This paper reports on the 2nd ShARe/CLEFeHealth evaluation lab, which continues our evaluation resource building activities for the medical domain. In this lab we focus on patients' information needs, as opposed to the more common campaign focus on the specialised information needs of physicians and other healthcare workers. The usage scenario of the lab is to ease patients' and next-of-kins' understanding of eHealth information, in particular clinical reports. The 1st ShARe/CLEFeHealth evaluation lab was held in 2013. That lab consisted of three tasks: Task 1 focused on named entity recognition and normalization of disorders; Task 2 on normalization of acronyms/abbreviations; and Task 3 on information retrieval to address questions patients may have when reading clinical reports. This year's lab introduces a new challenge in Task 1 on visual-interactive search and exploration of eHealth data. Its aim is to help patients (or their next-of-kin) with readability issues related to their hospital discharge documents and with related information search on the Internet. Task 2 continues the information extraction work of the 2013 lab, specifically focusing on disorder attribute identification and normalization from clinical text. Finally, this year's Task 3 further extends the 2013 information retrieval task by cleaning the 2013 document collection and introducing a new query generation method and multilingual queries. The de-identified clinical reports used by the three tasks were from US intensive care and originated from the MIMIC II database. Other text documents for Tasks 1 and 3 were from the Internet and originated from the Khresmoi project. Task 2 annotations originated from the ShARe annotations. For Tasks 1 and 3, new annotations, queries, and relevance assessments were created. 50, 79, and 91 people registered their interest in Tasks 1, 2, and 3, respectively. 24 unique teams participated, with 1, 10, and 14 teams in Tasks 1, 2 and 3, respectively.
The teams were from Africa, Asia, Canada, Europe, and North America. The Task 1 submission, reviewed by 5 expert peers, related to the task evaluation category of Effective use of interaction and targeted the needs of both expert and novice users. The best system had an Accuracy of 0.868 in Task 2a, an F1-score of 0.576 in Task 2b, and Precision at 10 (P@10) of 0.756 in Task 3. The results demonstrate the substantial community interest and capabilities of these systems in making clinical reports easier to understand for patients. The organisers have made data and tools available for future research and development.
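Precision at 10 (P@10), the Task 3 metric, is the fraction of relevant documents among the ten highest-ranked results for a query. A minimal sketch (the document identifiers are invented for illustration):

```python
def precision_at_k(ranked_ids, relevant_ids, k=10):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = ranked_ids[:k]
    return sum(1 for doc in top_k if doc in relevant_ids) / k

ranked = [f"doc{i}" for i in range(1, 11)]   # a system's top-10 ranking
relevant = {"doc1", "doc2", "doc4", "doc5", "doc7", "doc8", "doc9"}
score = precision_at_k(ranked, relevant)     # 7 relevant in the top 10 -> 0.7
```

A lab-level P@10 such as the 0.756 reported above is the mean of this per-query value over all evaluation queries.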
Abstract:
Corporate social responsibility is imperative for manufacturing companies to achieve sustainable development. Under a strong environmental information disclosure system, polluting companies are disadvantaged in terms of market competitiveness, because they lack an environmentally friendly image. The objective of this study is to analyze changes in productive inefficiency in relation to toxic chemical substance emissions in the United States and Japan under their corresponding policies. We apply the weighted Russell directional distance model to measure companies' productive inefficiency, which represents their production technology. The data encompass 330 US manufacturing firms observed from 1999 to 2007, and 466 Japanese manufacturing firms observed from 2001 to 2008. The article focuses on nine high-pollution industries (rubber and plastics; chemicals and allied products; paper and pulp; steel and non-ferrous metal; fabricated metal; industrial machinery; electrical products; transportation equipment; precision instruments) categorized into two industry groups: basic materials industries, and processing and assembly industries. The results show that productive inefficiency decreased in all industrial sectors in the United States and Japan from 2001 to 2007. In particular, that of the electrical products industry decreased rapidly after 2002 in both countries, possibly because of the enforcement of strict environmental regulations for electrical products exported to European markets.
Abstract:
Aerial application of granular insecticides is preferable because granules can effectively penetrate vegetation, there is less drift, and no product is lost to evaporation. We aimed to 1) assess the field efficacy of VectoBac G to control Aedes vigilax (Skuse) in saltmarsh pools, 2) develop a stochastic-modeling procedure to monitor application quality, and 3) assess the distribution of VectoBac G after an aerial application. Because ground-based studies with Ae. vigilax immatures found that VectoBac G provided effective control below the recommended label rate of 7 kg/ha, we trialed a nominated aerial rate of 5 kg/ha as a case study. Our distribution-pattern modeling method indicated that the variability in the number of VectoBac G particles captured in catch trays was greater than expected for 5 kg/ha, and that the widely accepted contour-mapping approach to visualizing the deposition pattern produced spurious results and was therefore not statistically appropriate. Based on the results of the distribution-pattern modeling, we calculated the catch-tray size required to analyze the distribution of aerially applied granular formulations. The minimum catch-tray size for products with large granules was 4 m² for Altosid pellets and 2 m² for VectoBac G. In contrast, the minimum catch-tray size for Altosid XRG, Aquabac G, and Altosand, with smaller granule sizes, was 1 m². Little gain in precision would be made by increasing the catch-tray size further when the increased workload and infrastructure are considered. Our improved methods for monitoring the distribution pattern of aerially applied granular insecticides can be adapted for use by both public health and agricultural contractors.
Abstract:
This paper is about localising across extreme lighting and weather conditions. We depart from the traditional point-feature-based approach, as matching under dramatic appearance changes is brittle and hard. Point-feature detectors are fixed, rigid procedures that pass over an image examining small, low-level structure such as corners or blobs, applying the same criteria to all images of all places. This paper takes a contrary view and asks what is possible if instead we learn a bespoke detector for every place. Our localisation task then turns into curating a large bank of spatially indexed detectors, and we show that this yields vastly superior robustness in exchange for a reduced but tolerable metric precision. We present an unsupervised system that produces broad-region detectors for distinctive visual elements, called scene signatures, which can be associated across almost all appearance changes. We show, using 21 km of data collected over a period of 3 months, that our system is capable of producing metric localisation estimates from night-to-day or summer-to-winter conditions.
Abstract:
We present a machine learning model that predicts a structural disruption score from a protein's primary structure. SCHEMA was introduced by Frances Arnold and colleagues as a method for determining putative recombination sites of a protein on the basis of the full (PDB) description of its structure. The present method provides an alternative to SCHEMA that determines the same score from sequence data only. Circumventing the need to resolve the full structure enables the exploration of as-yet unresolved and even hypothetical sequences for protein design efforts. The SCHEMA score is derived from the primary structure using a two-step approach: first predicting a secondary structure from the sequence, and then predicting the SCHEMA score from the predicted secondary structure. The correlation coefficient for the prediction is 0.88, indicating the feasibility of replacing SCHEMA with little loss of precision.
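The reported figure of 0.88 is a correlation coefficient between predicted and reference SCHEMA scores; the standard Pearson coefficient over paired score lists can be computed as follows (the score values below are invented purely for illustration):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

predicted = [0.9, 1.8, 3.1, 3.9]   # hypothetical predicted disruption scores
reference = [1.0, 2.0, 3.0, 4.0]   # hypothetical reference SCHEMA scores
r = pearson_r(predicted, reference)  # close to 1: strong linear agreement
```

A value near 1 indicates that the sequence-only predictor preserves the ranking and scale of the structure-based SCHEMA scores.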