913 results for Drilling process monitoring
Abstract:
BACKGROUND: Age-related macular degeneration is the most common cause of sight impairment in the UK. In neovascular age-related macular degeneration (nAMD), vision worsens rapidly (over weeks) due to abnormal blood vessels developing that leak fluid and blood at the macula.
OBJECTIVES: To determine the optimal role of optical coherence tomography (OCT) in diagnosing people newly presenting with suspected nAMD and monitoring those previously diagnosed with the disease.
DATA SOURCES: Databases searched: MEDLINE (1946 to March 2013), MEDLINE In-Process & Other Non-Indexed Citations (March 2013), EMBASE (1988 to March 2013), Biosciences Information Service (1995 to March 2013), Science Citation Index (1995 to March 2013), The Cochrane Library (Issue 2 2013), Database of Abstracts of Reviews of Effects (inception to March 2013), Medion (inception to March 2013), Health Technology Assessment database (inception to March 2013).
REVIEW METHODS: Types of studies: direct/indirect studies reporting diagnostic outcomes.
INDEX TEST: time domain optical coherence tomography (TD-OCT) or spectral domain optical coherence tomography (SD-OCT).
COMPARATORS: clinical evaluation, visual acuity, Amsler grid, colour fundus photographs, infrared reflectance, red-free images/blue reflectance, fundus autofluorescence imaging, indocyanine green angiography, preferential hyperacuity perimetry, microperimetry. Reference standard: fundus fluorescein angiography (FFA). Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool, version 2 (QUADAS-2). Meta-analysis models were fitted using hierarchical summary receiver operating characteristic curves. A Markov model was developed (65-year-old cohort, nAMD prevalence 70%), with nine strategies for diagnosis and/or monitoring, and a cost-utility analysis was conducted. An NHS and Personal Social Services perspective was adopted. Costs (2011/12 prices) and quality-adjusted life-years (QALYs) were discounted (3.5%). Deterministic and probabilistic sensitivity analyses were performed.
RESULTS: In pooled estimates of diagnostic studies (all TD-OCT), sensitivity and specificity [95% confidence interval (CI)] were 88% (46% to 98%) and 78% (64% to 88%) respectively. For monitoring, the pooled sensitivity and specificity (95% CI) were 85% (72% to 93%) and 48% (30% to 67%) respectively. The FFA for diagnosis and nurse-technician-led monitoring strategy had the lowest cost (£39,769; QALYs 10.473) and dominated all others except FFA for diagnosis and ophthalmologist-led monitoring (£44,649; QALYs 10.575; incremental cost-effectiveness ratio £47,768). The least costly strategy had a 46.4% probability of being cost-effective at a £30,000 willingness-to-pay threshold.
LIMITATIONS: Very few studies provided sufficient information for inclusion in meta-analyses. Only a few studies reported other tests; for some tests no studies were identified. The modelling was hampered by a lack of data on the diagnostic accuracy of strategies involving several tests.
CONCLUSIONS: Based on a small body of evidence of variable quality, OCT had high sensitivity and moderate specificity for diagnosis, and relatively high sensitivity but low specificity for monitoring. Strategies involving OCT alone for diagnosis and/or monitoring were unlikely to be cost-effective. Further research is required on (i) the performance of SD-OCT compared with FFA, especially for monitoring but also for diagnosis; (ii) the performance of strategies involving combinations/sequences of tests, for diagnosis and monitoring; (iii) the likelihood of active and inactive nAMD becoming inactive or active respectively; and (iv) assessment of treatment-associated utility weights (e.g. decrements), through a preference-based study.
STUDY REGISTRATION: This study is registered as PROSPERO CRD42012001930.
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
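The incremental cost-effectiveness comparison reported above can be reproduced arithmetically. A minimal sketch using the cost and QALY figures from the results section (the result differs slightly from the published £47,768 because the reported costs and QALYs are rounded):

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of strategy B versus strategy A:
    extra cost per extra quality-adjusted life-year."""
    return (cost_b - cost_a) / (qaly_b - qaly_a)

# Figures reported in the abstract: nurse-technician-led monitoring (A)
# versus ophthalmologist-led monitoring (B).
delta = icer(39769, 10.473, 44649, 10.575)
```

Since the computed ratio exceeds the £30,000 willingness-to-pay threshold, the more expensive strategy would not be considered cost-effective at that threshold, consistent with the conclusions above.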
Abstract:
Raman and anti-Stokes Raman spectroscopy were used to investigate the effect of exposure to high-power laser radiation on the crystalline phases of TiO2. Changes in the Raman and anti-Stokes Raman spectra of TiO2 over several time intervals of exposure to laser radiation are reported. Raman and anti-Stokes Raman spectroscopy provide detail on both the structure and the kinetics of the changes in crystalline phase in the titania material. Laser exposure generated increasing amounts of the rutile crystalline phase from the anatase crystalline phase. The Raman spectra displayed bands at 144 cm-1 (A1g), 197 cm-1 (Eg), 398 cm-1 (B1g), 515 cm-1 (A1g) and 640 cm-1 (Eg), assigned to anatase, which were replaced by bands at 143 cm-1 (B1g), 235 cm-1 (two-phonon process), 448 cm-1 (Eg) and 612 cm-1 (A1g), assigned to rutile. This indicates that laser irradiation of TiO2 changes the crystalline phase from anatase to rutile. Raman and anti-Stokes Raman spectroscopy are highly sensitive to the crystalline forms of TiO2 and allow the effect of laser irradiation upon TiO2 to be characterised. The technique would also be applicable as an in situ method for monitoring changes during the laser irradiation process.
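As an illustration of how such band positions can be used in monitoring, here is a minimal sketch that assigns a set of observed peak positions to anatase or rutile by counting matches against the reference bands listed above. The tolerance value and helper names are illustrative assumptions, not part of the study:

```python
# Reference Raman band positions (cm^-1) quoted in the abstract above.
ANATASE = [144, 197, 398, 515, 640]
RUTILE = [143, 235, 448, 612]

def classify_phase(peaks, tol=10):
    """Return the phase whose reference bands match more of the observed
    peaks, where a match means within `tol` cm^-1 (simple sketch)."""
    def score(ref):
        return sum(any(abs(p - r) <= tol for r in ref) for p in peaks)
    return "anatase" if score(ANATASE) >= score(RUTILE) else "rutile"

# Hypothetical peak list after prolonged irradiation.
phase = classify_phase([142, 236, 447, 610])
```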
Abstract:
New environmentally acceptable production methods are required to help reduce the environmental impact of many industrial processes. One potential route is the application of photocatalysis using semiconductors. This technique has enabled new environmentally acceptable synthetic routes for organic synthesis which do not require the use of toxic metals as redox reagents. These photocatalysts also have more favourable redox potentials than many traditional reagents. Semiconductor photocatalysis can also be applied to the treatment of polluted effluent or for the destruction of undesirable by-products of reactions. In addition to the clean nature of the process the power requirements of the technique can be relatively low, with some reactions requiring only sunlight.
Abstract:
A novel model-based principal component analysis (PCA) method is proposed in this paper for wide-area power system monitoring, aiming to tackle one of the critical drawbacks of conventional PCA, i.e. its inability to handle non-Gaussian distributed variables. It is a significant extension of the original PCA method, which has already been shown to outperform traditional methods such as rate-of-change-of-frequency (ROCOF). The ROCOF method is quick at processing local information, but its threshold is difficult to determine and nuisance tripping may easily occur. The proposed model-based PCA method uses a radial basis function neural network (RBFNN) model to handle the nonlinearity in the data set and so address the non-Gaussian issue, before the PCA method is used for islanding detection. To build an effective RBFNN model, this paper first uses a fast input selection method to remove insignificant neural inputs. Next, a heuristic optimization technique, namely Teaching-Learning-Based Optimization (TLBO), is adopted to tune the nonlinear parameters in the RBF neurons to build the optimized model. The novel RBFNN-based PCA monitoring scheme is then employed for wide-area monitoring, using the residuals between the model outputs and the real PMU measurements. Experimental results confirm the efficiency and effectiveness of the proposed method in monitoring a suite of process variables with different distribution characteristics, showing that the proposed RBFNN PCA method is a reliable scheme and an effective extension of the linear PCA method.
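The residual-based monitoring idea can be illustrated with plain linear PCA and a squared-prediction-error (SPE) control limit. This is a simplified stand-in for the RBFNN-based scheme described above (the paper models the nonlinearity first; here the data, the empirical limit, and the injected fault are synthetic assumptions):

```python
import numpy as np

def pca_spe_monitor(train, test, n_components=2):
    """Fit linear PCA on normal-operation data and flag test samples whose
    squared prediction error (SPE) exceeds an empirical 99th-percentile limit."""
    mu, sd = train.mean(axis=0), train.std(axis=0)
    z = (train - mu) / sd
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    P = vt[:n_components].T                      # retained loadings (d x k)
    spe_train = ((z - z @ P @ P.T) ** 2).sum(axis=1)
    limit = np.percentile(spe_train, 99)
    zt = (test - mu) / sd
    spe = ((zt - zt @ P @ P.T) ** 2).sum(axis=1)
    return spe > limit

rng = np.random.default_rng(0)
factors = rng.normal(size=(500, 2))              # two latent process drivers
W = np.array([[1.0, 0.8, 0.2, 0.0],
              [0.0, 0.3, 0.9, 1.0]])
normal = factors @ W + 0.05 * rng.normal(size=(500, 4))
fault = normal[:20].copy()
fault[:, 1] += 3.0                               # break the variable correlation
flags = pca_spe_monitor(normal, fault)
```

The fault shifts one variable out of its normal correlation structure, so its residual (the part of the sample not explained by the retained components) grows and the SPE limit is exceeded.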
Abstract:
Extrusion is one of the major methods for processing polymeric materials and the thermal homogeneity of the process output is a major concern for manufacture of high quality extruded products. Therefore, accurate process thermal monitoring and control are important for product quality control. However, most industrial extruders use single point thermocouples for the temperature monitoring/control although their measurements are highly affected by the barrel metal wall temperature. Currently, no industrially established thermal profile measurement technique is available. Furthermore, it has been shown that the melt temperature changes considerably with the die radial position and hence point/bulk measurements are not sufficient for monitoring and control of the temperature across the melt flow. The majority of process thermal control methods are based on linear models which are not capable of dealing with process nonlinearities. In this work, the die melt temperature profile of a single screw extruder was monitored by a thermocouple mesh technique. The data obtained was used to develop a novel approach of modelling the extruder die melt temperature profile under dynamic conditions (i.e. for predicting the die melt temperature profile in real-time). These newly proposed models were in good agreement with the measured unseen data. They were then used to explore the effects of process settings, material and screw geometry on the die melt temperature profile. The results showed that the process thermal homogeneity was affected in a complex manner by changing the process settings, screw geometry and material.
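As a toy illustration of modelling a die melt temperature profile against radial position, a least-squares polynomial fit can be used. The readings and the quadratic form below are hypothetical stand-ins, not the dynamic models developed in the work:

```python
import numpy as np

def fit_radial_profile(radial_pos, temps, degree=2):
    """Least-squares polynomial fit of melt temperature against normalised
    die radius, returning a callable profile (illustrative sketch)."""
    coeffs = np.polyfit(radial_pos, temps, degree)
    return np.poly1d(coeffs)

r = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])            # normalised die radius
T = np.array([210.0, 218.0, 221.0, 218.0, 210.0])    # hypothetical readings, deg C
profile = fit_radial_profile(r, T)
```

A fit like this makes the thermal non-homogeneity across the melt flow explicit: the fitted profile is hottest near the die centre and cooler toward the wall, which a single-point thermocouple reading would miss.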
Abstract:
In this work, the impact of conventional drilling and helical milling processes on the fatigue response of Ti-6Al-4V (grade 5 titanium alloy) is presented. Results show that workpieces produced by helical milling have a 119% longer fatigue life than drilled pieces under dry machining conditions, and a 96% longer fatigue life under lubricated conditions. The use of cutting fluid led to longer fatigue lives: 15% longer for drilling and 3% longer for helical milling. Other results, such as the machined surface roughness and the alloy surface and sub-surface microstructures, have also been studied in detail.
Abstract:
Highway structures such as bridges are subject to continuous degradation primarily due to ageing, loading and environmental factors. A rational transport policy must monitor and provide adequate maintenance to this infrastructure to guarantee the required levels of transport service and safety. Increasingly in recent years, bridges are being instrumented and monitored on an ongoing basis due to the implementation of Bridge Management Systems. This is very effective and provides a high level of protection to the public and early warning if the bridge becomes unsafe. However, the process can be expensive and time consuming, requiring the installation of sensors and data acquisition electronics on the bridge. This paper investigates the use of an instrumented 2-axle vehicle fitted with accelerometers to monitor the dynamic behaviour of a bridge network in a simple and cost-effective manner. A simplified half car-beam interaction model is used to simulate the passage of a vehicle over a bridge. This investigation involves the frequency domain analysis of the axle accelerations as the vehicle crosses the bridge. The spectrum of the acceleration record contains noise, vehicle, bridge and road frequency components. Therefore, the bridge dynamic behaviour is monitored in simulations for both smooth and rough road surfaces. The vehicle mass and axle spacing are varied in simulations along with bridge structural damping in order to analyse the sensitivity of the vehicle accelerations to a change in bridge properties. These vehicle accelerations can be obtained for different periods of time and serve as a useful tool to monitor the variation of bridge frequency and damping with time.
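The frequency-domain step described above amounts to locating the bridge-related peak in the spectrum of the axle acceleration record. A minimal sketch with a synthetic signal (the 4.5 Hz bridge mode, sampling rate, and noise level are assumed values, not results from the paper):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest spectral peak,
    ignoring the DC component."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[1:][np.argmax(spectrum[1:])]

fs = 200.0                                   # accelerometer sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic axle record: a 4.5 Hz bridge component plus road/vehicle noise.
acc = np.sin(2 * np.pi * 4.5 * t) + 0.3 * rng.normal(size=t.size)
f_bridge = dominant_frequency(acc, fs)
```

In practice the spectrum also contains vehicle and road-profile components, which is why the paper examines both smooth and rough road surfaces; a single peak-pick like this is only the simplest case.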
Abstract:
Camera traps are used to estimate densities or abundances using capture-recapture and, more recently, random encounter models (REMs). We deploy REMs to describe an invasive-native species replacement process, and to demonstrate their wider application beyond abundance estimation. The Irish hare Lepus timidus hibernicus is a high-priority endemic of conservation concern. It is threatened by an expanding population of the non-native European hare L. europaeus, an invasive species of global importance. Camera traps were deployed in thirteen 1 km squares, wherein the ratio of invader to native densities was corroborated by night-driven line transect distance sampling throughout the study area of 1652 km2. Spatial patterns of invasive and native densities between the invader’s core and peripheral ranges, and native allopatry, were comparable between methods. Native densities in the peripheral range were comparable to those in native allopatry using REM, or marginally depressed using Distance Sampling. Numbers of the invader were substantially higher than the native in the core range, irrespective of method, with a 5:1 invader-to-native ratio indicating species replacement. We also describe a post hoc optimization protocol for REM which will inform subsequent (re-)surveys, allowing survey effort (camera hours) to be reduced by up to 57% without compromising the width of confidence intervals associated with density estimates. This approach will form the basis of a more cost-effective means of surveillance and monitoring for both the endemic and invasive species. The European hare undoubtedly represents a significant threat to the endemic Irish hare.
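Random encounter models convert a camera trapping rate into a density estimate using animal travel speed and the geometry of the camera detection zone. A sketch of the standard gas-model REM estimator (the survey numbers below are hypothetical, purely for illustration, and are not the hare densities reported above):

```python
import math

def rem_density(photos, camera_days, speed, radius, angle):
    """Random encounter model density estimate (animals per km^2).
    photos: independent detections; camera_days: survey effort;
    speed: day range (km/day); radius (km) and angle (radians)
    describe the camera detection zone."""
    trap_rate = photos / camera_days
    return trap_rate * math.pi / (speed * radius * (2 + angle))

# Hypothetical survey: 120 detections over 600 camera-days.
d = rem_density(photos=120, camera_days=600, speed=1.5, radius=0.01, angle=0.17)
```

The post hoc optimization mentioned above works on the effort term: with the detection-zone parameters fixed, camera hours can be reduced until the confidence interval on this estimate starts to widen appreciably.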
Abstract:
The UK’s transportation network is supported by critical geotechnical assets (cuttings/embankments/dams) that require sustainable, cost-effective management, while maintaining an appropriate service level to meet social, economic, and environmental needs. Recent effects of extreme weather on these geotechnical assets have highlighted their vulnerability to climate variations. We have assessed the potential of surface wave data to portray the climate-related variations in mechanical properties of a clay-filled railway embankment. Seismic data were acquired bimonthly from July 2013 to November 2014 along the crest of a heritage railway embankment in southwest England. For each acquisition, the collected data were first processed to obtain a set of Rayleigh-wave dispersion and attenuation curves, referenced to the same spatial locations. These data were then analyzed to identify a coherent trend in their spatial and temporal variability. The relevance of the observed temporal variations was also verified with respect to the experimental data uncertainties. Finally, the surface wave dispersion data sets were inverted to reconstruct a time-lapse model of S-wave velocity for the embankment structure, using a least-squares laterally constrained inversion scheme. A key point of the inversion process was constituted by the estimation of a suitable initial model and the selection of adequate levels of spatial regularization. The initial model and the strength of spatial smoothing were then kept constant throughout the processing of all available data sets to ensure homogeneity of the procedure and comparability among the obtained VS sections. A continuous and coherent temporal pattern of surface wave data, and consequently of the reconstructed VS models, was identified. This pattern is related to the seasonal distribution of precipitation and soil water content measured on site.
Abstract:
The challenge of implementing the EU Water Framework Directive (WFD) fosters the development of new monitoring methods and approaches. It is now commonly accepted that classical monitoring campaigns at discrete points are not sufficient to fully assess and describe a water body. For this reason, the WFD promotes the use of modelling techniques in surface waters to assist all phases of the process, from characterisation and establishment of reference conditions to identification of pressures and assessment of impact. The work presented in this communication is based on these principles. Classical monitoring of the water status of the main transitional water bodies of the Algarve (southern Portugal) is combined with advanced in situ water profiling and with hydrodynamic, water quality and ecological modelling of the systems to build a complete description of their state. This approach extends the resolution of classical point sampling both spatially and temporally. The methodology was applied during a 12-month programme in the Ria Formosa coastal lagoon, the Guadiana estuary and the Arade estuary. The synoptic profiling uses a YSI 6600 EDS multi-parameter system attached to a boat and a GPS receiver to produce monthly synoptic maps of the systems. These data extend the discrete point sampling with laboratory analyses performed monthly at several points in each water body. The point sampling is used to calibrate the profiling system and to include variables, such as nutrients, not measured by the sensors. A total of 1427 samplings were performed for physical and chemical parameters, chlorophyll and microbiological contamination in the water column. These data are used to drive the hydrodynamic, transport and ecological modules of the MOHID water modelling system (www.mohid.com), enabling an integrated description of the water column.
Abstract:
Doctoral thesis (co-tutelle), Psychology (Educational Psychology), Faculdade de Psicologia da Universidade de Lisboa, Faculdade de Psicologia e de Ciências da Educação da Universidade de Coimbra, Technical University of Darmstadt, 2014
Abstract:
This paper presents a low-complexity, high-efficiency decimation filter which can be employed in electrocardiogram (ECG) acquisition systems. The decimation filter, with a decimation ratio of 128, works along with a third-order sigma-delta modulator. It is designed in four stages to reduce cost and power consumption. The work reported here provides an efficient approach to the decimation process for high-resolution biomedical data conversion applications by employing low-complexity two-path all-pass-based decimation filters. The performance of the proposed decimation chain was validated using the MIT-BIH arrhythmia database, and comparative simulations were conducted against the state of the art.
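The overall ratio of 128 can be factored into cascaded stages so that each anti-aliasing filter operates at a modest rate. A simplified numpy sketch of such a four-stage chain; the stage factors, filter length, and windowed-sinc design are illustrative assumptions, not the two-path all-pass filters used in the paper:

```python
import numpy as np

def lowpass_fir(cutoff, numtaps=33):
    """Windowed-sinc FIR low-pass filter; `cutoff` is a fraction of Nyquist."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(cutoff * n) * np.hamming(numtaps)
    return h / h.sum()                       # unity gain at DC

def decimate_stage(x, factor):
    """Anti-alias filter, then keep every `factor`-th sample."""
    h = lowpass_fir(0.8 / factor)            # cutoff at 0.8x the new Nyquist
    return np.convolve(x, h, mode='same')[::factor]

def decimation_chain(x, factors=(8, 4, 2, 2)):
    """Four-stage decimation; overall ratio 8*4*2*2 = 128."""
    for f in factors:
        x = decimate_stage(x, f)
    return x

fs = 128_000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)               # 50 Hz in-band test tone
y = decimation_chain(x)                      # output rate 1 kHz
```

Splitting the ratio this way keeps each filter short, which is the cost and power argument made in the abstract: the hardest filtering is done at the lowest rates.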
Abstract:
Waste oil recycling companies play a very important role in our society. Competition among companies is tough and process optimization is essential for survival. By equipping oil containers with a level monitoring system that periodically reports the level and alerts when it reaches the preset threshold, oil recycling companies are able to streamline the oil collection process and, thus, reduce operating costs while maintaining the quality of service. This paper describes the development of this level monitoring system by a team of four students from different engineering backgrounds and nationalities. The team conducted a study of the state of the art, drew up marketing and sustainable development plans and, finally, designed and implemented a prototype that continuously measures the container content level and sends an alert message as soon as it reaches the preset capacity.
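The report-and-alert behaviour described above can be captured in a few lines. A minimal sketch (the class name, threshold, and message format are illustrative assumptions, not the prototype's firmware):

```python
class LevelMonitor:
    """Tracks periodically reported fill levels and records an alert
    once the fill fraction reaches the preset threshold."""

    def __init__(self, capacity_litres, threshold=0.8):
        self.capacity = capacity_litres
        self.threshold = threshold
        self.alerts = []

    def report(self, level_litres):
        """Register one level reading; alert if the threshold is reached."""
        fraction = level_litres / self.capacity
        if fraction >= self.threshold:
            self.alerts.append(f"container at {fraction:.0%} - schedule collection")
        return fraction

m = LevelMonitor(capacity_litres=1000, threshold=0.8)
readings = [150, 420, 790, 815, 930]         # periodic sensor reports (litres)
fractions = [m.report(r) for r in readings]
```

Routing such alerts to the collection planner is what lets the company visit containers only when they are nearly full, which is the cost-reduction argument in the abstract.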
Abstract:
In this work, a comparative study of different drill point geometries and feed rates for drilling composite laminates is presented. To this end, thrust force monitoring during drilling, hole wall roughness measurement and delamination extension assessment after drilling were carried out. Delamination is evaluated using enhanced radiography combined with a dedicated computational platform that integrates image processing and analysis algorithms. An experimental procedure was planned and the outcomes were evaluated. Results show that a careful combination of the factors involved, such as drill tip geometry and feed rate, can reduce delamination damage.
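Delamination extension around a drilled hole is commonly summarised by the delamination factor: the ratio of the maximum damage diameter to the nominal hole diameter. A sketch of that conventional metric (the abstract does not state which criterion the platform computes, and the diameters below are hypothetical):

```python
def delamination_factor(d_max_mm, d_nominal_mm):
    """Conventional delamination factor Fd = Dmax / D0: maximum damage
    diameter around the hole over the nominal hole diameter (Fd >= 1,
    with 1 meaning no visible delamination)."""
    return d_max_mm / d_nominal_mm

# Hypothetical measurement from a radiographic image of a 5 mm hole.
fd = delamination_factor(d_max_mm=6.3, d_nominal_mm=5.0)
```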
Abstract:
Mathematical models and statistical analysis are key instruments in soil science research, as they can describe and/or predict the current state of a soil system. These tools allow us to explore the behaviour of soil-related processes and properties, as well as to generate new hypotheses for future experimentation. A good model and analysis of soil property variation, permitting sound conclusions and the estimation of spatially correlated variables at unsampled locations, clearly depends on the amount and quality of the data and on the robustness of the techniques and estimators. The quality of the data, in turn, depends on a competent data collection procedure and on capable laboratory analytical work. Following the standard soil sampling protocols available, soil samples should be collected with attention to key points such as a convenient spatial scale, landscape homogeneity (or non-homogeneity), land colour, soil texture, land slope and solar exposure. Obtaining good-quality data from forest soils is predictably expensive, as it is labour intensive and demands substantial manpower and equipment, both in field work and in laboratory analysis. Moreover, the sample collection scheme to be used in a forest field campaign is not simple to design, as the sampling strategies chosen depend strongly on soil taxonomy. In fact, a sampling grid cannot be followed if rock is found at the intended collection depth, if no soil is found at all, or if large trees bar the collection. Consequently, a proficient design of a soil data sampling campaign in a forest field is not always a simple process and sometimes represents a considerable challenge. In this work, we present some difficulties that occurred during two experiments on forest soil conducted to study the spatial variation of some soil physical-chemical properties.
Two different sampling protocols were considered for monitoring two types of forest soil located in NW Portugal: umbric regosol and lithosol. Two different types of sampling equipment were also used: a manual auger and a shovel. Both scenarios were analysed, and the results allow us to conclude that monitoring forest soil for mathematical and statistical investigation requires a data collection procedure compatible with established protocols, but that a pre-defined grid often fails when the variability of the soil property is not spatially uniform. In such cases, the sampling grid should be adapted from one part of the landscape to another, and this fact should be taken into account in the mathematical procedure.
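The grid-adaptation idea in the conclusion can be sketched as a rule that refines the sampling spacing wherever a variability indicator for a landscape unit is high. A purely illustrative example (the extent, per-cell scores, and halving rule are assumptions, not the protocols used in the experiments):

```python
def adaptive_grid(extent_m, coarse_step, variability, threshold):
    """Sampling locations on a coarse square grid, refined to half spacing
    in cells where a per-cell variability score exceeds `threshold`."""
    points = []
    n = int(extent_m // coarse_step)
    for i in range(n):
        for j in range(n):
            x0, y0 = i * coarse_step, j * coarse_step
            # denser sampling where the soil property varies more
            step = coarse_step / 2 if variability[i][j] > threshold else coarse_step
            k = int(coarse_step // step)
            for a in range(k):
                for b in range(k):
                    points.append((x0 + a * step, y0 + b * step))
    return points

var = [[0.1, 0.9], [0.2, 0.8]]    # hypothetical per-cell variability scores
pts = adaptive_grid(extent_m=20, coarse_step=10, variability=var, threshold=0.5)
```

Each high-variability cell contributes four points instead of one, so the survey effort concentrates where the extra samples actually improve the spatial estimates.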