986 results for gas test
Abstract:
The upstream oil and gas industry has been contending with massive data sets and monolithic files for many years, but "Big Data" is a relatively new concept that has the potential to significantly re-shape the industry. Despite the impressive amount of value being realized by Big Data technologies in other parts of the marketplace, much of the data collected within the oil and gas sector tends to be discarded, ignored, or analyzed in a very cursory way. This viewpoint examines existing data management practices in the upstream oil and gas industry and compares them to practices and philosophies that have emerged in organizations that are leading the way in Big Data. The comparison shows that, in companies widely considered to be leaders in Big Data analytics, data is regarded as a valuable asset in its own right; this is usually not true within the oil and gas industry, where data is frequently regarded as descriptive information about a physical asset rather than as something valuable in and of itself. The paper then discusses how the industry could potentially extract more value from data, and concludes with a series of policy-related questions to this end.
Abstract:
Automotive interactive technologies represent an exemplar challenge for user experience (UX) designers, as the concerns for aesthetics, functionality and usability add to the pressing issues of safety and cognitive demand. This extended abstract presents a methodology for the user-centred creation and evaluation of novel in-car applications, involving real users in realistic use settings. As a case study, we present the methodologies of an ideation workshop in a simulated environment and the evaluation of six design idea prototypes for in-vehicle head-up display (HUD) applications using a semi-naturalistic drive. Both methods rely on video recordings of real traffic situations that the users are familiar with and/or have experienced themselves. The extended abstract presents experiences and results from the evaluation and a reflection on our methods.
Abstract:
Nowadays, demand for automated gas metal arc welding (GMAW) is growing, and the need for intelligent systems to ensure the accuracy of the procedure is increasing accordingly. To date, weld pool geometry has been the most widely used factor in the quality assessment of intelligent welding systems, but it has recently been found that the Mahalanobis Distance (MD) can not only be used for this purpose but is also more efficient. In the present paper, an Artificial Neural Network (ANN) has been used to predict the MD parameter, and the advantages and disadvantages of other methods are discussed. The Levenberg–Marquardt algorithm was found to be the most effective training algorithm for the GMAW process. Because the number of neurons plays an important role in optimal network design, a trial-and-error approach was used, and 30 was found to be the optimal number of neurons. The model was investigated with different numbers of layers in a Multilayer Perceptron (MLP) architecture, and it was shown that, for the aim of this work, the optimal result is obtained with an MLP with one hidden layer. The robustness of the system was evaluated by adding noise to the input data and studying its effect on the prediction capability of the network. The experiments for this study were conducted on an automated GMAW setup integrated with a data acquisition system and prepared in a laboratory for welding steel plates 12 mm in thickness. The accuracy of the network was evaluated by the root mean square (RMS) error between the measured and estimated values. The low error value (about 0.008) reflects the good accuracy of the model. The comparison of the results predicted by the ANN with the test data set also showed very good agreement, which reveals the predictive power of the model. Therefore, the ANN model offered here for the GMA welding process can be used effectively for prediction purposes.
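As a rough illustration of the network configuration described in this abstract (a single hidden layer of 30 neurons, accuracy reported as RMS error), the sketch below uses scikit-learn with synthetic placeholder data. The Levenberg–Marquardt algorithm used in the paper is not available in scikit-learn, so the quasi-Newton 'lbfgs' solver stands in for it; the features and target are hypothetical, not the welding data of the study, where the inputs would be the acquired process signals and the target the measured Mahalanobis Distance.

```python
# Minimal sketch of a single-hidden-layer MLP with 30 neurons, assuming
# scikit-learn. 'lbfgs' stands in for Levenberg-Marquardt, which scikit-learn
# does not provide. All data below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # hypothetical welding-process features
y = X @ np.array([0.5, -1.2, 0.3, 0.8]) + rng.normal(scale=0.05, size=500)  # stand-in for MD

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer with 30 neurons, as reported to be optimal in the abstract.
model = MLPRegressor(hidden_layer_sizes=(30,), solver="lbfgs", max_iter=2000,
                     random_state=0).fit(X_train, y_train)

# RMS error between measured and estimated values, the accuracy metric used above.
rms = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
print(f"RMS error: {rms:.4f}")
```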
Abstract:
Due to the increasing recognition of global climate change, the building and construction industry is under pressure to reduce carbon emissions. A central issue in striving towards reduced carbon emissions is the need for a practicable and meaningful yardstick for assessing and communicating greenhouse gas (GHG) results. ISO 14067 was published by the International Organization for Standardization in May 2013. By providing specific requirements for the life cycle assessment (LCA) approach, the standard clarifies GHG assessment with respect to choosing system boundaries and simulating the use and end-of-life phases when quantifying the carbon footprint of products (CFP). More importantly, the standard, for the first time, provides step-by-step guidance and a standardized template for communicating CFPs in the form of a CFP external communication report, CFP performance tracking report, CFP declaration and CFP label. ISO 14067 therefore makes a valuable contribution to GHG quantification and to the transparent communication and comparison of CFPs. In addition, as cradle-to-grave should be used as the system boundary if the use and end-of-life phases can be simulated, ISO 14067 will hopefully promote the development and implementation of simulation technologies, Building Information Modelling (BIM) in particular, in the building and construction industry.
Abstract:
The products evolved during the thermal decomposition of coal-derived pyrite/marcasite were studied using simultaneous thermogravimetry coupled with Fourier-transform infrared spectroscopy and mass spectrometry (TG-FTIR-MS). The main gases and volatile products released during the thermal decomposition of coal-derived pyrite/marcasite are water (H2O), carbon dioxide (CO2), and sulfur dioxide (SO2). The results showed that the evolved products arise mainly in two processes: (1) H2O is released mainly below 300 °C; (2) in the temperature range of 450–650 °C, the main evolved products are SO2 and a small amount of CO2. It is worth mentioning that SO3 was not observed as a product, as no peak appeared in the m/z = 80 curve; SO2 is therefore the main gaseous product of the thermal decomposition of the sample. Coal-derived pyrite/marcasite differs from mineral pyrite in its thermal decomposition temperature. The mass spectrometric results are in good agreement with the infrared spectroscopic analysis of the evolved gases. These results provide direct evidence for the thermal decomposition products and support the proposed interpretations. Therefore, TG-FTIR-MS is a powerful tool for investigating gas evolution during the thermal decomposition of materials.
Abstract:
Many researchers in the field of civil structural health monitoring (SHM) have developed and tested their methods on simple to moderately complex laboratory structures such as beams, plates, frames, and trusses. Fieldwork has also been conducted by many researchers and practitioners on more complex operating bridges. Most laboratory structures do not adequately replicate the complexity of truss bridges. Informed by a brief review of the literature, this paper documents the design and proposed test plan of a structurally complex laboratory bridge model that has been specifically designed for the purpose of SHM research. Preliminary results have been presented in the companion paper.
Abstract:
Background The capacity to diagnose, quantify and evaluate movement beyond the general confines of a clinical environment, under effectiveness conditions, may alleviate the rampant strain on limited, expensive and highly specialized medical resources. The iPhone 4® incorporates a three-dimensional accelerometer subsystem together with highly robust software applications. The present study aimed to evaluate the reliability and concurrent criterion-related validity of the accelerations measured with an iPhone 4® during an Extended Timed Get Up and Go test. The Extended Timed Get Up and Go is a clinical test in which the patient gets up from a chair, walks ten metres, turns, and returns to the chair. Methods A repeated-measures, cross-sectional, analytical study. Test-retest reliability of the kinematic measurements of the iPhone 4® was assessed against a standard validated laboratory device. We calculated the Coefficient of Multiple Correlation between the acceleration signals of the two sensors for each subject, in each sub-phase, in each of the three Extended Timed Get Up and Go test trials. To investigate statistical agreement between the two sensors, we used the Bland-Altman method. Results Regarding the correlation analysis in the present work, the Coefficients of Multiple Correlation of the five subjects across their triplicate trials were as follows: in the Sit-to-Stand sub-phase they ranged from r = 0.991 to 0.842; in Gait Go, from r = 0.967 to 0.852; in Turn, from 0.979 to 0.798; in Gait Come, from 0.964 to 0.887; and in Turn-to-Stand-to-Sit, from 0.992 to 0.877. All the correlations between the sensors were significant (p < 0.001). The Bland-Altman plots obtained showed a strong tendency to stay close to zero, especially on the y- and x-axes, during the five phases of the Extended Timed Get Up and Go test. Conclusions The inertial sensor mounted in the iPhone 4® is sufficiently reliable and accurate to evaluate and identify the kinematic patterns of an Extended Timed Get Up and Go test. While the analysis and interpretation of 3D kinematic data remain dauntingly complex, the iPhone 4® makes the task of acquiring the data relatively inexpensive and easy.
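For readers unfamiliar with the Bland-Altman agreement analysis mentioned in the methods, the sketch below shows a minimal version, assuming NumPy and two synthetic acceleration signals standing in for the iPhone 4® and laboratory-sensor recordings.

```python
# Minimal sketch of a Bland-Altman agreement analysis between two acceleration
# signals. The signals below are synthetic placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
iphone = np.sin(t) + rng.normal(scale=0.05, size=t.size)  # hypothetical iPhone signal
lab    = np.sin(t) + rng.normal(scale=0.05, size=t.size)  # hypothetical lab-sensor signal

diff = iphone - lab
mean_diff = diff.mean()           # bias between the two sensors
loa = 1.96 * diff.std(ddof=1)     # 95% limits-of-agreement half-width

print(f"bias = {mean_diff:.4f}")
print(f"limits of agreement: [{mean_diff - loa:.4f}, {mean_diff + loa:.4f}]")
```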
Abstract:
Background Balance dysfunction is one of the most common problems in people who suffer a stroke. The parameterisation of standardised functional tests by means of inertial sensors has been promoted in applied medicine. The aim of this study was to compare the kinematic variables of the Functional Reach Test (FRT), obtained from two inertial sensors placed on the trunk and lumbar region, between stroke survivors (SS) and healthy older adults (HOA), and to analyse the reliability of the kinematic measurements obtained. Methods Cross-sectional study. Five SS and five HOA over 65 years old. A descriptive analysis of the average range, as well as of all recorded kinematic variables, was performed. The intrasubject and intersubject reliability of the measured variables was calculated directly. Results Over the same intervals, the angular displacement was greater in the HOA group; however, the intervals were completed in similar times by both groups, and the HOA performed the test at a higher speed and with greater acceleration in each of the intervals. The SS values were higher than the HOA values for the maximum and minimum acceleration in the trunk and in the lumbar region. Conclusions The SS showed less functional reach and a narrower, slower and less accelerated movement during FRT execution, but with higher peaks of acceleration and speed when compared with the HOA.
Abstract:
Purpose To evaluate the influence of cone location and corneal cylinder on RGP-corrected visual acuities and residual astigmatism in patients with keratoconus. Methods In this prospective study, 156 eyes of 134 patients were enrolled. A complete ophthalmologic examination including manifest refraction, best spectacle-corrected visual acuity (BSCVA) and slit-lamp biomicroscopy was performed, and corneal topography analysis was done. According to the cone location on the topographic map, the patients were divided into central and paracentral cone groups. Trial RGP lenses were selected based on the flat Sim K readings and a ‘three-point touch’ fitting approach was used. Over-refraction with the contact lens in place was performed, residual astigmatism (RA) was measured and best-corrected RGP visual acuities (RGPVA) were recorded. Results The mean age (±SD) was 22.1 ± 5.3 years. 76 eyes (48.6%) had a central cone and 80 eyes (51.4%) had a paracentral cone. Prior to RGP lens fitting, the mean (±SD) subjective refraction spherical equivalent (SRSE), subjective refraction astigmatism (SRAST) and BSCVA (logMAR) were −5.04 ± 2.27 D, −3.51 ± 1.68 D and 0.34 ± 0.14, respectively. There were statistically significant differences between the central and paracentral cone groups in the mean values of SRSE, SRAST, flat meridian (Sim K1), steep meridian (Sim K2), mean K and corneal cylinder (p-values < 0.05). Comparison of BSCVA with RGPVA shows that vision improved by 0.3 logMAR with RGP lenses (p < 0.0001). Mean (±SD) RA was −0.72 ± 0.39 D. There were no statistically significant differences between the RGPVAs and RAs of the central and paracentral cone groups (p = 0.22 and p = 0.42, respectively). Pearson's correlation analysis showed a statistically significant relationship between corneal cylinder and both BSCVA and RGPVA; however, the relationship between corneal cylinder and residual astigmatism was not significant. Conclusions Cone location has no effect on RGP-corrected visual acuities or residual astigmatism in patients with keratoconus. Corneal cylinder and Sim K values influence RGP-corrected visual acuities but do not influence residual astigmatism.
Abstract:
Background and purpose There are no published studies on the parameterisation and reliability of the single-leg stance (SLS) test with inertial sensors in stroke patients. Purpose: to analyse the reliability (intra-observer/inter-observer) and sensitivity of inertial sensors used for the SLS test in stroke patients. Secondary objective: to compare the records of the two inertial sensors (trunk and lumbar) to detect any significant differences in the kinematic data obtained in the SLS test. Methods Design: cross-sectional study. While performing the SLS test, two inertial sensors were placed at the lumbar (L5-S1) and trunk (T7–T8) regions. Setting: Laboratory of Biomechanics (Health Science Faculty - University of Málaga). Participants: four chronic stroke survivors (over 65 years old). Measurements: displacement and velocity; rotation (X-axis), flexion/extension (Y-axis), inclination (Z-axis); resultant displacement and velocity, $RV = \sqrt{V_x^2 + V_y^2 + V_z^2}$. Along with the SLS kinematic variables, descriptive analyses, differences between sensor locations, and intra-observer and inter-observer reliability were also calculated. Results Differences between the sensors were significant only for left inclination velocity (p = 0.036) and extension displacement in the non-affected leg with eyes open (p = 0.038). Intra-observer reliability of the trunk sensor ranged from 0.889 to 0.921 for displacement and from 0.849 to 0.892 for velocity. Intra-observer reliability of the lumbar sensor was between 0.896 and 0.949 for displacement and between 0.873 and 0.894 for velocity. Inter-observer reliability of the trunk sensor was between 0.878 and 0.917 for displacement and between 0.847 and 0.884 for velocity. Inter-observer reliability of the lumbar sensor ranged from 0.870 to 0.940 for displacement and from 0.863 to 0.884 for velocity. Conclusion There were no significant differences between the kinematic records obtained by the two inertial sensors placed in the lumbar and thoracic regions during SLS testing. In addition, inertial sensors have the potential to be reliable, valid and sensitive instruments for kinematic measurements during SLS testing, but further research is needed.
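The resultant-velocity formula quoted in the measurements can be computed directly from the per-axis signals; the sketch below is a minimal NumPy version using hypothetical velocity components, not the study's data.

```python
# Minimal sketch of the resultant-velocity computation RV = sqrt(Vx^2 + Vy^2 + Vz^2)
# used in the abstract above, assuming NumPy arrays of per-sample axis velocities.
import numpy as np

def resultant_velocity(vx: np.ndarray, vy: np.ndarray, vz: np.ndarray) -> np.ndarray:
    """Euclidean norm of the three velocity components at each sample."""
    return np.sqrt(vx**2 + vy**2 + vz**2)

# Hypothetical example with synthetic velocity components.
rng = np.random.default_rng(2)
vx, vy, vz = rng.normal(size=(3, 100))
rv = resultant_velocity(vx, vy, vz)
print(rv[:5])
```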
Abstract:
Background The diagnosis of frailty is based on physical impairments, and clinicians have indicated that early detection is one of the most effective methods of reducing the severity of physical frailty. An alternative to the classical diagnosis could be the instrumentation of classical functional tests, such as the Romberg test or the Timed Get Up and Go test. The aim of this study was (I) to measure and describe the magnitude of the accelerometry values in the Romberg test in two groups of frail and non-frail elderly people through instrumentation with the iPhone 4®, (II) to analyse the performances and differences between the study groups, and (III) to analyse the performances and differences within the study groups in order to characterise the accelerometer responses to increasingly difficult challenges to balance. Methods This is a cross-sectional study of 18 subjects over 70 years old: 9 frail subjects and 9 non-frail subjects. The non-parametric Mann–Whitney U test was used for between-group comparisons of the mean values derived from the different tasks. The Wilcoxon signed-rank test was used to analyse differences between the variants of the test within each of the two independent study groups. Results The largest differences between the groups were found in the accelerometer values with eyes closed and feet parallel: maximum peak acceleration in the lateral axis (p < 0.01), minimum peak acceleration in the lateral axis (p < 0.01) and minimum peak acceleration of the resultant vector (p < 0.01). With eyes open and feet parallel, the greatest differences found between the groups were in the maximum peak acceleration in the lateral axis (p < 0.01), the minimum peak acceleration in the lateral axis (p < 0.01) and the minimum peak acceleration of the resultant vector (p < 0.001). With eyes closed and feet in tandem, the greatest differences found between the groups were in the minimum peak acceleration in the lateral axis (p < 0.01). Conclusions The accelerometer fitted in the iPhone 4® can be used to study and analyse the kinematics of the Romberg test in frail and non-frail elderly people. In addition, the results indicate that the accelerometry values were significantly different between the frail and non-frail groups, and that the values from the accelerometer increased as the test was made more complicated.
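The peak-acceleration measures compared between groups (maximum and minimum peaks in the lateral axis and in the resultant vector) can be extracted from a tri-axial trace as in the sketch below, which assumes NumPy and a synthetic recording in place of the iPhone 4® data.

```python
# Minimal sketch of the peak-acceleration measures compared above, assuming a
# recorded tri-axial accelerometer trace. The trace here is synthetic.
import numpy as np

rng = np.random.default_rng(3)
acc = rng.normal(scale=0.1, size=(600, 3))   # columns: lateral, AP, vertical (g)

lateral = acc[:, 0]
resultant = np.linalg.norm(acc, axis=1)      # resultant acceleration vector

features = {
    "max_peak_lateral": lateral.max(),
    "min_peak_lateral": lateral.min(),
    "max_peak_resultant": resultant.max(),
    "min_peak_resultant": resultant.min(),
}
print(features)
```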
Abstract:
We review studies of Nelson's (1976) Modified Card Sorting Test (MCST) that have examined the performance of subjects with frontal lobe dysfunction. Six studies investigated the performance of normal controls and patients with frontal lobe dysfunction, whereas four studies compared the performance of frontal and nonfrontal patients. One further study compared the performance of amnesic patients both on the MCST and on the original Wisconsin Card Sorting Test (WCST). Evidence regarding the MCST's differential sensitivity to frontal lobe dysfunction is weak, as is the evidence regarding the equivalence of the MCST and WCST. It is likely that the MCST is an altogether different test from the standard version. In the absence of proper normative data for the MCST, we provide a table of scores derived from the control groups of various studies. Given the paucity of evidence, further research is required before the MCST can be recommended for use as a marker of frontal lobe dysfunction.
Abstract:
Objectives. To investigate the test-retest stability of a standardized version of Nelson's (1976) Modified Card Sorting Test (MCST) and its relationships with demographic variables in a sample of healthy older adults. Design. A standard card order and administration were devised for the MCST and administered to participants at an initial assessment, and again at a second session conducted a minimum of six months later in order to examine its test-retest stability. Participants were also administered the WAIS-R at initial assessment in order to provide a measure of psychometric intelligence. Methods. Thirty-six (24 female, 12 male) healthy older adults aged 52 to 77 years with mean education 12.42 years (SD = 3.53) completed the MCST on two occasions approximately 7.5 months (SD = 1.61) apart. Stability coefficients and test-retest differences were calculated for the range of scores. The effect of gender on MCST performance was examined. Correlations between MCST scores and age, education and WAIS-R IQs were also determined. Results. Stability coefficients ranged from .26 for the percent perseverative errors measure to .49 for the failure to maintain set measure. Several measures were significantly correlated with age, education and WAIS-R IQs, although no effect of gender on MCST performance was found. Conclusions. None of the stability coefficients reached the level required for clinical decision making. The results indicate that participants' age, education, and intelligence need to be considered when interpreting MCST performance. Normative studies of MCST performance as well as further studies with patients with executive dysfunction are needed.
Abstract:
The human connectome has recently become a popular research topic in neuroscience, and many new algorithms have been applied to analyze brain networks. In particular, network topology measures from graph theory have been adapted to analyze network efficiency and 'small-world' properties. While there has been a surge in the number of papers examining connectivity through graph theory, questions remain about its test-retest reliability (TRT). In particular, the reproducibility of structural connectivity measures has not been assessed. We examined the TRT of global connectivity measures generated from graph theory analyses of 17 young adults who underwent two high angular resolution diffusion imaging (HARDI) scans approximately 3 months apart. Of the measures assessed, modularity had the highest TRT, and it was stable across a range of sparsities (a thresholding parameter used to define which network edges are retained). These reliability measures underline the need to develop network descriptors that are robust to acquisition parameters.
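As an illustration of the kind of measure reported to be most reliable here, the sketch below computes modularity across a small range of sparsities, assuming NetworkX and a synthetic symmetric matrix standing in for a structural connectivity matrix; it is not the pipeline used in the study.

```python
# Minimal sketch: modularity of a thresholded connectivity matrix across sparsities.
# The matrix is synthetic; in the study it would come from HARDI tractography.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(4)
n = 60
w = rng.random((n, n))
w = (w + w.T) / 2                # symmetric stand-in for a connectivity matrix
np.fill_diagonal(w, 0)

for sparsity in (0.1, 0.2, 0.3):  # fraction of strongest edges retained
    k = int(sparsity * n * (n - 1) / 2)
    thresh = np.sort(w[np.triu_indices(n, 1)])[-k]
    g = nx.from_numpy_array(np.where(w >= thresh, w, 0.0))
    communities = greedy_modularity_communities(g, weight="weight")
    q = modularity(g, communities, weight="weight")
    print(f"sparsity={sparsity:.1f}  modularity={q:.3f}")
```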
Abstract:
A study was undertaken to examine further the effects of perceived work control on employee adjustment. On the basis of the stress antidote model, it was proposed that high levels of prediction, understanding, and control of work-related events would have direct, indirect, and interactive effects on levels of employee adjustment. These hypotheses were tested in a short-term longitudinal study of 137 employees of a large retail organization. The stress antidote measures appeared to be indirectly related to employee adjustment, via their effects on perceptions of work stress. There was weak evidence for the proposal that prediction, understanding, and control would buffer the negative effects of work stress. Additional analyses indicated that the observed effects of prediction, understanding, and control were independent of employees' generalized control beliefs. However, there was no support for the proposal that the effects of the stress antidote measures would be dependent on employees' generalized control beliefs.