317 results for fibre diameter measurement


Relevance:

20.00%

Publisher:

Abstract:

The measurement error model is a well-established statistical method for regression problems in the medical sciences, although it is rarely used in ecological studies. While the situations in which it is appropriate may be less common in ecology, there are instances in which its use may benefit prediction and the estimation of parameters of interest. We have chosen to explore this topic using a conditional independence model in a Bayesian framework with a Gibbs sampler, as this gives a great deal of flexibility, allowing us to analyse a number of different models without losing generality. Using simulations and two examples, we show how the conditional independence model can be used in ecology, and when it is appropriate.
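As a minimal sketch of the kind of Gibbs sampler described above, consider a toy measurement error model in which the true covariate x is latent, the observed w = x + u is error-prone, and y regresses on x. Conditional independence of y and w given x yields closed-form normal updates. All variances are assumed known here for brevity, and all names and values are illustrative, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data: the true covariate x is observed only through noisy w.
n = 200
sigma_x, sigma_u, sigma_e = 1.0, 0.5, 0.3   # assumed-known standard deviations
mu_x = 0.0
a_true, b_true = 1.0, 2.0
x_true = rng.normal(mu_x, sigma_x, n)
w = x_true + rng.normal(0, sigma_u, n)       # error-prone measurement of x
y = a_true + b_true * x_true + rng.normal(0, sigma_e, n)

# Gibbs sampler for (x, a, b).
n_iter, burn = 2000, 500
a, b = 0.0, 0.0
samples = np.zeros((n_iter, 2))
for t in range(n_iter):
    # 1. Sample latent x_i | y_i, w_i, a, b. Conditional independence of
    #    y and w given x makes this a conjugate normal update.
    prec = 1/sigma_x**2 + 1/sigma_u**2 + b**2/sigma_e**2
    mean = (mu_x/sigma_x**2 + w/sigma_u**2 + b*(y - a)/sigma_e**2) / prec
    x = rng.normal(mean, np.sqrt(1/prec))
    # 2. Sample (a, b) | x, y with a flat prior: the posterior is normal
    #    around the OLS fit computed on the imputed x.
    X = np.column_stack([np.ones(n), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    a, b = rng.multivariate_normal(beta_hat, sigma_e**2 * XtX_inv)
    samples[t] = a, b

print("posterior mean (a, b):", samples[burn:].mean(axis=0))
```

Regressing y directly on the noisy w would attenuate the slope estimate; sampling the latent x inside the Gibbs loop is what corrects for the measurement error.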

Relevance:

20.00%

Publisher:

Abstract:

The concept of system use has suffered from a "too simplistic definition" (DeLone and McLean [9], p. 16). This paper reviews various attempts at conceptualizing and measuring system use, and then proposes a re-conceptualization of it as "the level of incorporation of an information system within a user's processes." We then develop the concept of a Functional Interface Point (FIP) and four dimensions of system usage: automation level, the proportion of the business process encoded by the information system; extent, the proportion of the FIPs used by the business process; frequency, the rate at which FIPs are used by the participants in the process; and thoroughness, the level of use of the information/functionality provided by the system at an FIP. The article concludes with a discussion of some implications of this re-conceptualization and areas for follow-on research.

Relevance:

20.00%

Publisher:

Abstract:

Organizations invest heavily in Customer Relationship Management (CRM) and Supply Chain Management (SCM) systems and their related infrastructure, presumably expecting positive benefits to the organization. Assessing the benefits of such systems is an important aspect of managing them. Given the substantial differences between CRM and SCM systems and traditional intra-organizational applications, existing Information Systems benefits measurement models and frameworks are ill-suited to gauging CRM and SCM benefits. This paper reports the preliminary findings of a research project that seeks to develop a measurement model for assessing the benefits of CRM and SCM applications. The a priori benefits measurement model was developed by reviewing 55 academic studies and 40 practitioner papers. The review of the related literature yielded 606 benefits, which were then synthesized into 74 mutually exclusive benefit measures of CRM and SCM applications, arranged under five dimensions.

Relevance:

20.00%

Publisher:

Abstract:

Background: To compare the intraocular pressure readings obtained with the iCare rebound tonometer and the 7CR non-contact tonometer with those measured by Goldmann applanation tonometry in treated glaucoma patients. Design: A prospective, cross-sectional study conducted in a private tertiary glaucoma clinic. Participants: 109 patients (54 male, 55 female), including only eyes under medical treatment for glaucoma. Methods: Measurement by Goldmann applanation tonometry, iCare rebound tonometry and 7CR non-contact tonometry. Main Outcome Measures: Intraocular pressure. Results: There were strong correlations between the intraocular pressure measurements obtained with Goldmann and both the rebound and non-contact tonometers (Spearman r values ≥ 0.79, p < 0.001). However, there were small, statistically significant differences between the average readings for each tonometer. For the rebound tonometer, the mean intraocular pressure was slightly higher than Goldmann applanation tonometry in right eyes (p = 0.02) and similar in left eyes (p = 0.93). The Goldmann-correlated measurements from the non-contact tonometer were lower than the average Goldmann reading for both right (p < 0.001) and left (p < 0.01) eyes. The corneal-compensated measurements from the non-contact tonometer were significantly higher than those of the other tonometers (p ≤ 0.001). Conclusions: The iCare rebound tonometer and the 7CR non-contact tonometer measure IOP in fundamentally different ways from the Goldmann applanation tonometer. The resulting IOP values vary between the instruments, and this will need to be considered when comparing clinical with home-acquired measurements.
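For illustration, the kind of agreement analysis reported above (Spearman correlation plus a paired comparison between two tonometers' readings) can be sketched as follows. The simulated readings and the use of a Wilcoxon signed-rank test are assumptions for the example only, not the study's actual data or analysis plan.

```python
import numpy as np
from scipy import stats

# Simulated stand-in data: 109 paired IOP readings (mmHg) from two
# instruments, with a small positive offset for the second device.
rng = np.random.default_rng(2)
goldmann = rng.normal(16, 4, 109)
rebound = goldmann + rng.normal(0.5, 1.5, 109)

rho, p_rho = stats.spearmanr(goldmann, rebound)   # strength of correlation
_, p_diff = stats.wilcoxon(goldmann, rebound)     # paired difference test
print(f"Spearman r = {rho:.2f} (p = {p_rho:.3g}); paired difference p = {p_diff:.3g}")
```

A strong correlation with a significant paired difference, as in the study, means the instruments rank eyes similarly but are offset from one another, so their readings are not interchangeable.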

Relevance:

20.00%

Publisher:

Abstract:

In many bridges, vertical displacements are among the most relevant parameters for structural health monitoring in both the short and long terms. Bridge managers around the globe are always looking for a simple way to measure the vertical displacements of bridges, but such measurements are difficult to carry out. In recent years, with the advancement of fiber-optic technologies, fiber Bragg grating (FBG) sensors have become more commonly used in structural health monitoring due to their outstanding advantages, including multiplexing capability, immunity to electromagnetic interference, and high resolution and accuracy. For these reasons, FBG sensors are proposed here to develop a simple, inexpensive and practical method for measuring the vertical displacements of bridges. A curvature approach is proposed, in which vertical displacements are determined from curvature measurements. In addition, with the successful development of an FBG tilt sensor, an inclination approach using inclination measurements is also proposed. A series of simulation tests of a full-scale bridge was conducted, showing that both approaches can determine vertical displacements for bridges with various support conditions and varying stiffness (EI) along the spans, without any prior knowledge of the loading. These approaches can thus measure vertical displacements for most slab-on-girder and box-girder bridges. Moreover, given the advantages of FBG sensors, they can be implemented to monitor bridge behavior remotely and in real time. Recommendations for further development of these approaches are discussed at the end of the paper.
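A minimal sketch of the curvature approach under the Euler-Bernoulli assumption v'' = κ: curvatures measured at discrete sensor locations are integrated twice, and the two integration constants are fixed by zero displacement at the supports. The sensor spacing, load case and q/EI value below are illustrative assumptions, used only to check the routine against the classical simply supported beam solution.

```python
import numpy as np

def displacement_from_curvature(x, kappa):
    """Estimate vertical displacement from measured curvatures by double
    trapezoidal integration (Euler-Bernoulli: v'' = kappa), enforcing
    zero displacement at both supports."""
    theta = np.concatenate([[0.0], np.cumsum(0.5*(kappa[1:] + kappa[:-1]) * np.diff(x))])
    v = np.concatenate([[0.0], np.cumsum(0.5*(theta[1:] + theta[:-1]) * np.diff(x))])
    # Subtract the straight line through the end points so v = 0 at both
    # supports (this fixes the two integration constants).
    return v - (v[0] + (v[-1] - v[0]) * (x - x[0]) / (x[-1] - x[0]))

# Check against a simply supported beam under uniform load q:
# kappa(x) = M(x)/EI = q x (L - x) / (2 EI); exact midspan deflection 5qL^4/(384EI).
L_span, q_EI = 10.0, 1e-4              # hypothetical span (m) and q/EI ratio
x = np.linspace(0, L_span, 21)         # 21 "sensor" locations along the span
kappa = q_EI * x * (L_span - x) / 2
v = displacement_from_curvature(x, kappa)
# Downward deflection comes out negative in this sign convention.
print(-v[10], 5 * q_EI * L_span**4 / 384)   # numerical vs exact midspan value
```

The same double-integration structure applies to the inclination approach, except that measured rotations already give the first integral, so only one integration step and one boundary condition remain.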

Relevance:

20.00%

Publisher:

Abstract:

Metallic materials exposed to oxygen-enriched atmospheres – as commonly used in the medical, aerospace, aviation and numerous chemical processing industries – represent a significant fire hazard which must be addressed during design, maintenance and operation. Hence, accurate knowledge of metallic material flammability is required. Reduced gravity (i.e. space-based) operations present additional unique concerns, where the absence of gravity must also be taken into account. The flammability of metallic materials has historically been quantified using three standardised test methods developed by NASA, ASTM and ISO. These tests typically involve the forceful (promoted) ignition of a test sample (typically a 3.2 mm diameter cylindrical rod) in pressurised oxygen. A test sample is defined as flammable when it undergoes burning that is independent of the ignition process utilised. In the standardised tests, this is indicated by the propagation of burning further than a defined amount, or "burn criterion". The burn criterion in use at the onset of this project was arbitrarily selected and did not accurately reflect the length a sample must burn in order to be burning independently of the ignition event; in some cases, it required complete consumption of the test sample for a metallic material to be considered flammable. It has been demonstrated that a) a metallic material's propensity to support burning is altered by any increase in test sample temperature greater than ~250-300 °C, and b) promoted ignition causes an increase in temperature of the test sample in the region closest to the igniter, a region referred to as the Heat Affected Zone (HAZ). If a test sample continues to burn past the HAZ (where the HAZ is defined as the region of the test sample above the igniter that undergoes a temperature increase of 250 °C or more by the end of the ignition event), it is burning independently of the igniter and should be considered flammable. The extent of the HAZ, therefore, can be used to justify the selection of the burn criterion. A two-dimensional mathematical model was developed to predict the extent of the HAZ created in a standard test sample by a typical igniter. The model was validated against previous theoretical and experimental work performed in collaboration with NASA, and then used to predict the extent of the HAZ for different metallic materials in several configurations. The predicted extent of the HAZ varied significantly, ranging from ~2-27 mm depending on the test sample's thermal properties and the test conditions (i.e. pressure). The magnitude of the HAZ was found to increase with increasing thermal diffusivity and with decreasing pressure (due to slower ignition times). Based upon the findings of this work, a new burn criterion requiring 30 mm of the test sample to be consumed (from the top of the ignition promoter) was recommended and validated. This new burn criterion was subsequently included in the latest revisions of the ASTM G124 and NASA 6001B international test standards that are used to evaluate metallic material flammability in oxygen. These revisions also enable reduced gravity metallic material flammability testing to be conducted in strict accordance with the ASTM G124 standard, allowing measurement and comparison of the relative flammability (i.e.
Lowest Burn Pressure (LBP), Highest No-Burn Pressure (HNBP) and average Regression Rate of the Melting Interface (RRMI)) of metallic materials in normal and reduced gravity, as well as determination of the applicability of normal gravity test results to reduced gravity use environments. This is important, as most space-based applications currently use normal gravity information to qualify systems and/or components for reduced gravity use; this is shown here to be non-conservative for metallic materials, which are more flammable in reduced gravity. The flammability of two metallic materials, Inconel® 718 and 316 stainless steel (both commonly used to manufacture components for oxygen service in terrestrial and space-based systems), was evaluated in normal and reduced gravity using the new ASTM G124-10 test standard. This allowed direct comparison of the flammability of the two metallic materials in normal and reduced gravity. The results of this work clearly show, for the first time, that metallic materials are more flammable in reduced gravity than in normal gravity when testing is conducted as described in the ASTM G124-10 test standard. This was the case both in terms of higher regression rates (i.e. faster consumption of the test sample, the fuel) and in terms of burning at lower pressures in reduced gravity. Specifically, it was found that the LBP for 3.2 mm diameter Inconel® 718 and 316 stainless steel test samples decreased by 50%, from 3.45 MPa (500 psia) in normal gravity to 1.72 MPa (250 psia) in reduced gravity, for the Inconel® 718, and by 25%, from 3.45 MPa (500 psia) in normal gravity to 2.76 MPa (400 psia) in reduced gravity, for the 316 stainless steel. The average RRMI increased by factors of 2.2 (27.2 mm/s in 2.24 MPa (325 psia) oxygen in reduced gravity compared to 12.8 mm/s in 4.48 MPa (650 psia) oxygen in normal gravity) for the Inconel® 718 and 1.6 (15.0 mm/s in 2.76 MPa (400 psia) oxygen in reduced gravity compared to 9.5 mm/s in 5.17 MPa (750 psia) oxygen in normal gravity) for the 316 stainless steel. Reasons for the increased flammability of metallic materials in reduced gravity compared to normal gravity are discussed, based upon the observations made during reduced gravity testing and previous work. Finally, the implications of these results for fire safety and engineering applications are presented and discussed, in particular examining methods for mitigating the risk of a fire in reduced gravity.
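To illustrate the HAZ idea numerically, here is a one-dimensional finite-difference sketch (not the thesis's validated two-dimensional model): heat is conducted into a rod from a hot igniter end for a fixed ignition time, and the HAZ extent is taken as the furthest point whose temperature rise reaches 250 °C. The diffusivity, igniter temperature and ignition time are hypothetical placeholders.

```python
import numpy as np

# 1-D explicit (FTCS) heat conduction into a rod from an igniter at x = 0.
alpha = 4.0e-6                  # thermal diffusivity, m^2/s (hypothetical alloy)
L, nx = 0.05, 201               # rod length (m) and number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha        # time step within the explicit stability limit (0.5)
t_ignite = 5.0                  # igniter burn time, s (assumed)

T = np.zeros(nx)                # temperature rise above ambient, deg C
T[0] = 1500.0                   # assumed fixed hot-end temperature during ignition

t = 0.0
while t < t_ignite:
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    T[0] = 1500.0               # igniter holds the end temperature
    t += dt

# HAZ extent: furthest point with a temperature rise of at least 250 deg C.
haz = np.max(np.where(T >= 250.0)[0]) * dx
print(f"HAZ extent: {haz*1000:.1f} mm")
```

With these placeholder values the predicted extent lands at roughly 9 mm, inside the ~2-27 mm range the validated model reported; raising the diffusivity or lengthening the ignition time pushes the 250 °C front further up the rod, consistent with the trends described above.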

Relevance:

20.00%

Publisher:

Abstract:

Fibre composite structures have become the most attractive candidate for civil engineering applications. Fibre reinforced polymer (FRP) composite materials have been used in the rehabilitation and replacement of old, degrading traditional structures, and to build new structures. However, the lack of design standards for civil infrastructure limits their structural applications. The majority of existing applications have been designed based on research and guidelines provided by the fibre composite manufacturers, or on the designer's experience, with the result that the final structure is generally over-designed. This paper provides a review of the available studies related to the design optimization of fibre composite structures used in civil engineering, such as plates, beams, box beams, sandwich panels, bridge girders and bridge decks. Various optimization methods are presented and compared, and the importance of using the appropriate optimization technique is discussed. An improved methodology, which considers experimental testing, numerical modelling and design constraints, is proposed for the design optimization of composite structures.

Relevance:

20.00%

Publisher:

Abstract:

Increasing global competitiveness has forced manufacturing organizations to produce high-quality products more quickly and at a competitive cost. In order to reach these goals, they need good-quality components from suppliers at optimum price and lead time. This has forced companies to adopt improvement practices such as lean manufacturing, Just in Time (JIT) and effective supply chain management. Applying new improvement techniques and tools incurs higher establishment costs and more Information Delay (ID); on the other hand, these techniques may reduce the risk of stock-outs and improve supply chain flexibility, giving better overall performance. However, industry practitioners are unable to measure the overall effects of these improvement techniques with a standard evaluation model, so an effective overall supply chain performance evaluation model is essential for suppliers as well as manufacturers to assess their companies under different supply chain strategies. The literature on lean supply chain performance evaluation is comparatively limited, and most existing models assume random values for performance variables. The purpose of this paper is to propose an effective supply chain performance evaluation model using triangular linguistic fuzzy numbers, and to recommend optimum ranges for performance variables for lean implementation. The model considers all the supply chain performance criteria (input, output and flexibility), converts the values to triangular linguistic fuzzy numbers, and evaluates overall supply chain performance under different situations. Results show that, with the proposed performance measurement model, the improvement area for each variable can be accurately identified.
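A minimal sketch of the triangular-fuzzy-number machinery such a model relies on: linguistic ratings are mapped to triangular fuzzy numbers (l, m, u), averaged component-wise across performance criteria, and defuzzified by the centroid to give a crisp overall score. The five-level scale and the ratings below are assumed examples, not the paper's instrument.

```python
import numpy as np

# Assumed linguistic scale mapping ratings to triangular fuzzy numbers (l, m, u).
SCALE = {
    "very low":  (0.00, 0.00, 0.25),
    "low":       (0.00, 0.25, 0.50),
    "medium":    (0.25, 0.50, 0.75),
    "high":      (0.50, 0.75, 1.00),
    "very high": (0.75, 1.00, 1.00),
}

def fuzzy_average(ratings):
    """Average a list of triangular fuzzy numbers component-wise."""
    return np.array([SCALE[r] for r in ratings]).mean(axis=0)

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number: (l + m + u) / 3."""
    return float(np.mean(tfn))

# Hypothetical ratings for input, output and flexibility criteria:
ratings = ["high", "medium", "very high", "high"]
overall = fuzzy_average(ratings)
print("aggregate TFN:", overall, "-> crisp score:", round(defuzzify(overall), 3))
```

Keeping the (l, m, u) triple through the aggregation preserves the uncertainty in the linguistic judgements; defuzzification is deferred to the final comparison step.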

Relevance:

20.00%

Publisher:

Abstract:

Vehicle-emitted particles are of significant concern because of their potential to influence local air quality and human health. Transport microenvironments usually contain higher vehicle emission concentrations than other environments, and people spend a substantial amount of time in these microenvironments when commuting. Currently there is limited scientific knowledge on particle concentration, passenger exposure and the distribution of vehicle emissions in transport microenvironments, partially because the instrumentation required to conduct such measurements is not available in many research centres. Information on passenger waiting time and location in such microenvironments has also not been investigated, which makes it difficult to evaluate a passenger's spatial-temporal exposure to vehicle emissions. Furthermore, current emission models are incapable of rapidly predicting emission distribution, given the complexity of variations in emission rates that result from changes in driving conditions, as well as the time spent in each driving condition within the transport microenvironment. In order to address these gaps in scientific knowledge, this work conducted, for the first time, a comprehensive statistical analysis of experimental data, along with multi-parameter assessment, exposure evaluation and comparison, and emission model development and application, in relation to traffic-interrupted transport microenvironments. The work aimed to quantify and characterise particle emissions and human exposure in transport microenvironments, with bus stations and a pedestrian crossing identified as suitable research locations representing typical transport microenvironments. Firstly, two bus stations in Brisbane, Australia, with different designs, were selected for measurements of particle number size distributions and of particle number and PM2.5 concentrations during two different seasons. Traffic and meteorological parameters were monitored simultaneously, with the aim of quantifying particle characteristics and investigating the impact of bus flow rate, station design and meteorological conditions on particle characteristics at the stations. The results showed higher concentrations of PN20-30 at the station situated in an open area (open station), which is likely attributable to the lower average daily temperature compared to the station with a canyon structure (canyon station). During precipitation events, particle number concentration in the size range 25-250 nm decreased greatly, and the average daily reduction in PM2.5 concentration on rainy days compared to fine days was 44.2% and 22.6% at the open and canyon stations, respectively. The effect of ambient wind speed on particle number concentrations was also examined, and no relationship was found between particle number concentration and wind speed for the entire measurement period. In addition, 33 pairs of average half-hourly PN7-3000 concentrations were calculated and identified at the two stations, during the same time of day and with the same ambient wind speed and precipitation conditions. The results of a paired t-test showed that the average half-hourly PN7-3000 concentrations at the two stations were not significantly different at the 5% significance level (t = 0.06, p = 0.96), which indicates that the different station designs were not a crucial factor influencing PN7-3000 concentrations.
A further assessment of passenger exposure to bus emissions on a platform was conducted at another bus station in Brisbane, Australia. Sampling was conducted over seven weekdays to investigate spatial-temporal variations in size-fractionated particle number and PM2.5 concentrations, as well as human exposure on the platform. For the whole day, the average PN13-800 concentration was 1.3 × 10^4 and 1.0 × 10^4 particles/cm3 at the centre and end of the platform, respectively, of which PN50-100 accounted for the largest proportion of the total count. Furthermore, the contribution of exposure at the bus station to overall daily exposure was assessed using two assumed scenarios, a school student and an office worker. It was found that, although the daily time fraction (the percentage of time spent at a location over a whole day) at the station was only 0.8%, the daily exposure fractions (the percentage of daily exposure accounted for by exposure at a location) at the station were 2.7% and 2.8% for exposure to PN13-800, and 2.7% and 3.5% for exposure to PM2.5, for the school student and the office worker, respectively. A new parameter, "exposure intensity" (the ratio of the daily exposure fraction to the daily time fraction), was also defined and calculated at the station, with values of 3.3 and 3.4 for exposure to PN13-800, and 3.3 and 4.2 for exposure to PM2.5, for the school student and the office worker, respectively. In order to quantify the enhanced emissions at critical locations and define the emission distribution for further dispersion modelling in traffic-interrupted transport microenvironments, a composite line source emission (CLSE) model was developed to quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. This model takes into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilises multiple representative segments to capture the emission distribution of real vehicle flow. The model not only helps to quantify the enhanced emissions at critical locations, but also helps to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bidirectional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. The CLSE model was also applied at a signalled pedestrian crossing, in order to assess the increase in particle number emissions from motor vehicles when forced to stop and accelerate from rest. The CLSE model was used to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses, including 1 car travelling in 1 direction (1 car / 1 direction), 14 cars / 1 direction, 1 bus / 1 direction, 28 cars / 2 directions, 24 cars and 2 buses / 2 directions, and 20 cars and 4 buses / 2 directions.
It was found that the total emissions produced during stopping at a red signal were significantly higher than when the traffic moved at a steady speed. Overall, total emissions due to the interruption of the traffic increased by factors of 13, 11, 45, 11, 41 and 43 for the above six cases, respectively. In summary, this PhD thesis presents the results of a comprehensive study on particle number and mass concentrations, together with particle size distribution, in a bus station transport microenvironment, as influenced by bus flow rates, meteorological conditions and station design. Passenger spatial-temporal exposure to bus-emitted particles was also assessed according to waiting time and location along the platform, as well as the contribution of exposure at the bus station to overall daily exposure. Due to the complexity of the interrupted traffic flow within transport microenvironments, a unique CLSE model was also developed, which is capable of quantifying emission levels at critical locations within the transport microenvironment, for the purpose of evaluating passenger exposure and conducting simulations of vehicle emission dispersion. The application of the CLSE model at a pedestrian crossing also proved its applicability and simplicity for use in a real-world transport microenvironment.
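A toy sketch of the CLSE accounting described above: the queue is divided into segments, each assigned a driving condition (cruise, decelerate, idle or accelerate), and total emissions are summed from per-mode emission factors and the time vehicles spend in each segment. The emission factors and times below are hypothetical, chosen only to show how interrupted flow inflates emissions relative to steady cruising.

```python
# Hypothetical particle number emission factors, particles per vehicle per
# second for each driving condition (not the thesis's measured values).
EMISSION_FACTOR = {
    "cruise": 1.0e11,
    "decelerate": 5.0e10,
    "idle": 2.0e10,
    "accelerate": 6.0e11,
}

def clse_total(segments, n_vehicles):
    """Total particle number emitted along the queue.

    segments: list of (driving condition, seconds spent per vehicle)."""
    return n_vehicles * sum(EMISSION_FACTOR[mode] * t for mode, t in segments)

# One bus decelerating into, idling at, and accelerating away from a stop,
# versus the same road section traversed at steady speed:
queue = [("decelerate", 6.0), ("idle", 20.0), ("accelerate", 8.0)]
steady = [("cruise", 4.0)]
print("interrupted/steady emission ratio:",
      round(clse_total(queue, 1) / clse_total(steady, 1), 1))
```

Because the acceleration factor dominates, the segment where vehicles accelerate from rest contributes most of the total, mirroring the finding above that the highest emissions occurred where buses were accelerating rather than idling.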

Relevance:

20.00%

Publisher:

Abstract:

Inter-Vehicular Communications (IVC) are considered a promising technological approach for enhancing transportation safety and improving highway efficiency. Previous theoretical work has demonstrated the benefits of IVC in vehicle strings. Simulations of partially IVC-equipped vehicle strings showed that only a small equipment ratio is sufficient to drastically reduce the number of head-on collisions. However, these results are based on the assumption that IVC exhibits lossless and instantaneous message transmission. This paper presents the research design of an empirical measurement of a vehicle string, with the goal of highlighting the constraints introduced by the actual characteristics of communication devices. A warning message diffusion system based on IEEE 802.11 wireless technology was developed for an emergency braking scenario. Preliminary results are also presented, showing the latencies introduced by using 802.11a, and discussing early findings and experimental limitations.
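The measurement idea can be sketched with a simple UDP probe/echo pair: the sender timestamps each warning message, the receiver echoes it back, and round-trip times and loss are recorded. This is a generic sketch of such a testbed, not the authors' IEEE 802.11a system; the port number and message format are arbitrary.

```python
import socket
import struct
import time

PORT = 50007  # arbitrary placeholder port

def send_probes(dest_ip, n=100):
    """Send n timestamped probes and report loss and mean round-trip time."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(1.0)
    rtts = []
    for seq in range(n):
        payload = struct.pack("!Id", seq, time.monotonic())  # seq + send time
        s.sendto(payload, (dest_ip, PORT))
        try:
            data, _ = s.recvfrom(64)                # wait for the echo
            _, t_sent = struct.unpack("!Id", data)
            rtts.append(time.monotonic() - t_sent)
        except socket.timeout:
            pass                                    # count as a lost message
    loss = 1 - len(rtts) / n
    mean_ms = 1e3 * sum(rtts) / max(len(rtts), 1)
    print(f"loss {loss:.1%}, mean RTT {mean_ms:.2f} ms")

def echo_server():
    """Run on the receiving vehicle: echo every probe back to its sender."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    while True:
        data, addr = s.recvfrom(64)
        s.sendto(data, addr)
```

Using round trips sidesteps clock synchronisation between vehicles; halving the RTT gives a rough one-way latency estimate, which is the quantity that matters for the emergency braking scenario.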

Relevance:

20.00%

Publisher:

Abstract:

Objective: To determine the test-retest reliability of measurements of thickness, fascicle length (Lf) and pennation angle (θ) of the vastus lateralis (VL) and gastrocnemius medialis (GM) muscles in older adults. Participants: Twenty-one healthy older adults (11 men and 10 women; average age 68.1 ± 5.2 years) participated in this study. Methods: Ultrasound images (probe frequency 10 MHz) of the VL at two sites (VL sites 1 and 2) were obtained with participants seated with the knee at 90° flexion. For GM measures, participants lay prone with the ankle fixed at 15° dorsiflexion. Measures were taken on two separate occasions, 7 days apart (T1 and T2). Results: The ICCs (95% CI) were: VL site 1 thickness = 0.96 (0.90–0.98); VL site 2 thickness = 0.96 (0.90–0.98); VL θ = 0.87 (0.68–0.95); VL Lf = 0.80 (0.50–0.92); GM thickness = 0.97 (0.92–0.99); GM θ = 0.85 (0.62–0.94); and GM Lf = 0.90 (0.75–0.96). The 95% ratio limits of agreement (LOAs) for all measures, calculated by multiplying the standard deviation of the ratio of the results between T1 and T2 by 1.96, ranged from 10.59% to 38.01%. Conclusion: The ability of these tests to detect a real change in VL and GM muscle architecture is good at the group level but problematic at the individual level, as the relatively large 95% ratio LOAs in the current study may encompass the changes in architecture observed in other training studies. Therefore, the current findings suggest that B-mode ultrasonography can be used with confidence by researchers investigating changes in muscle architecture in groups of older adults, but its use is limited for showing changes in individuals over time.
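For illustration, the 95% ratio limits-of-agreement calculation described above (1.96 times the standard deviation of the T2/T1 ratios, expressed as a percentage) can be computed as follows; the test-retest data are simulated stand-ins, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated test-retest pairs for 21 participants, e.g. VL thickness (cm):
t1 = rng.normal(2.0, 0.3, 21)
t2 = t1 * rng.normal(1.0, 0.08, 21)   # retest with roughly 8% ratio noise

# 95% ratio LOA: 1.96 x sample SD of the T2/T1 ratios, as a percentage.
ratio_loa = 1.96 * np.std(t2 / t1, ddof=1) * 100
print(f"95% ratio LOA: {ratio_loa:.2f}%")
```

The interpretation mirrors the conclusion above: an individual's change must exceed this percentage band before it can be distinguished from measurement noise, even when group-level ICCs are high.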

Relevance:

20.00%

Publisher:

Abstract:

The aim of this study was to develop a reliable technique for measuring the area of a curved surface from an axial computed tomography (CT) scan, and to apply this clinically to the measurement of articular cartilage surface area in acetabular fractures. The method used was a triangulation algorithm. In order to determine the accuracy of the technique, the areas of hemispheres of known size were measured to give the percentage error in area measurement. Seven such hemispheres were machined into a Perspex block and their areas measured geometrically, and also from CT scans by means of the triangulation algorithm. Scans of 1, 2 and 4 mm slice thickness and separation were used. The error varied with slice thickness and hemisphere diameter. It was shown that the 2 mm slice thickness provides the most accurate area measurement, while 1 mm cuts overestimate and 4 mm cuts underestimate the area. For a hemisphere diameter of 5 cm, which is of similar size to the acetabulum, the error was -11.2% for 4 mm cuts, +4.2% for 2 mm cuts and +5.1% for 1 mm cuts. As expected, area measurement was more accurate for larger hemispheres. This method can be applied clinically to quantify acetabular fractures by measuring the percentage area of intact articular cartilage. In the case of both-column fractures, the percentage area of secondary congruence can be determined. This technique of quantifying acetabular fractures has a potential clinical application as a prognostic factor and as an indication for surgery in the long term.
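A minimal sketch of the triangulation idea: contours from successive slices are joined into a triangle mesh, and the surface area is the sum of the triangle areas |(B − A) × (C − A)|/2. Here the method is checked against a hemisphere of known area, mimicking the Perspex-block validation; the radius and ring counts are arbitrary choices for the example.

```python
import numpy as np

def mesh_area(vertices, faces):
    """Surface area of a triangle mesh: sum over faces of |(B-A) x (C-A)| / 2."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

# Build a hemisphere of radius r from rings of constant z, like stacked
# CT slice contours; exact curved area is 2*pi*r^2.
r, n_rings, n_pts = 25.0, 26, 72
phi = np.linspace(0.0, np.pi / 2, n_rings)              # pole -> equator
theta = np.linspace(0.0, 2*np.pi, n_pts, endpoint=False)
verts = np.array([[r*np.sin(p)*np.cos(t), r*np.sin(p)*np.sin(t), r*np.cos(p)]
                  for p in phi for t in theta])

# Join each ring to the next with two triangles per contour segment.
faces = []
for i in range(n_rings - 1):
    for j in range(n_pts):
        a, b = i*n_pts + j, i*n_pts + (j + 1) % n_pts
        faces += [[a, b, a + n_pts], [b, b + n_pts, a + n_pts]]
faces = np.array(faces)

err = mesh_area(verts, faces) / (2*np.pi*r**2) - 1
print(f"triangulated area error: {100*err:+.2f}%")
```

Coarsening the ring spacing (analogous to thicker CT cuts) increases the error, which is the slice-thickness dependence the validation study quantified.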

Relevance:

20.00%

Publisher:

Abstract:

The quality assurance of stereotactic radiotherapy and radiosurgery treatments requires small-field dose measurements that can be experimentally challenging. This study used Monte Carlo simulations to establish that PAGAT dosimetry gel can be used to provide accurate, high-resolution, three-dimensional dose measurements of stereotactic radiotherapy fields. A small cylindrical container (4 cm height, 4.2 cm diameter) was filled with PAGAT gel, placed in the parietal region inside a CIRS head phantom, and irradiated with a 12-field stereotactic radiotherapy plan. The resulting three-dimensional dose measurement was read out using an optical CT scanner and compared with the treatment planning prediction of the dose delivered to the gel during the treatment. A BEAMnrc/DOSXYZnrc simulation of this treatment was completed to provide a standard against which the accuracy of the gel measurement could be gauged. The three-dimensional dose distributions obtained from Monte Carlo and from the gel measurement were found to be in better agreement with each other than with the dose distribution provided by the treatment planning system's pencil beam calculation. Both sets of data showed close agreement with the treatment planning system's dose distribution through the centre of the irradiated volume, and substantial disagreement with the treatment planning system at the penumbrae. The Monte Carlo calculations and gel measurements both indicated that the treated volume was up to 3 mm narrower, with steeper penumbrae and more variable out-of-field dose, than predicted by the treatment planning system. The Monte Carlo simulations allowed the accuracy of the PAGAT gel dosimeter to be verified in this case, so that PAGAT gel can be used with greater confidence in future measurements of dose from stereotactic and other radiotherapy treatments.
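As a simple illustration of comparing two three-dimensional dose grids (gel readout, Monte Carlo and planning system), the sketch below reports the fraction of voxels agreeing within a dose tolerance. The abstract does not specify the comparison metric actually used, so the tolerance, low-dose threshold and synthetic Gaussian "fields" here are assumptions for illustration only.

```python
import numpy as np

def agreement_fraction(dose_a, dose_b, tol=0.03, threshold=0.1):
    """Fraction of voxels (above `threshold` of the reference maximum)
    where the two dose grids agree within `tol` of the maximum dose."""
    ref = dose_a.max()
    mask = dose_a > threshold * ref
    diff = np.abs(dose_a - dose_b)[mask]
    return np.mean(diff <= tol * ref)

# Synthetic example: a "measured" field slightly narrower than planned,
# echoing the up-to-3-mm narrowing reported above.
z, y, x = np.mgrid[-20:21, -20:21, -20:21].astype(float)
plan = np.exp(-(x**2 + y**2 + z**2) / (2 * 8.0**2))
meas = np.exp(-(x**2 + y**2 + z**2) / (2 * 7.4**2))
print(f"voxels within 3% of max dose: {100*agreement_fraction(plan, meas):.1f}%")
```

A voxelwise dose-difference test like this is deliberately strict at steep penumbrae, which is exactly where the gel and Monte Carlo data diverged from the pencil beam prediction.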

Relevance:

20.00%

Publisher:

Abstract:

DeLone and McLean (1992, p. 16) argue that the concept of "system use" has suffered from a "too simplistic definition." Despite decades of substantial research on system use, the concept has yet to receive strong theoretical scrutiny: many measures of system use, and the ways they have been developed, are idiosyncratic and lack credibility or comparability. This paper reviews various attempts at conceptualizing and measuring system use and then proposes a re-conceptualization of it as "the level of incorporation of an information system within a user's processes." The definition is supported by work systems theory and by system and key-user-group considerations. We then develop the concept of a Functional Interface Point (FIP) and four dimensions of system usage: extent, the proportion of the FIPs used by the business process; frequency, the rate at which FIPs are used by the participants in the process; thoroughness, the level of use of the information/functionality provided by the system at an FIP; and attitude towards use, a set of measures that assess the level of comfort, the degree of respect and the challenges set forth by the system. The paper argues that the automation level, the proportion of the business process encoded by the information system, has a mediating impact on system use. The article concludes with a discussion of some implications of this re-conceptualization and areas for follow-on research.
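A toy sketch of how the proposed dimensions might be operationalised for a single business process, with extent and thoroughness computed as the proportions defined above. The counts, the weekly-rate units for frequency, and the class itself are illustrative assumptions, not the paper's instrument; attitude towards use, being perceptual, is omitted.

```python
from dataclasses import dataclass

@dataclass
class ProcessUsage:
    fips_in_process: int     # FIPs the business process exposes
    fips_used: int           # FIPs actually exercised by participants
    uses_per_week: float     # frequency: rate at which FIPs are invoked
    features_available: int  # information/functionality items at the FIPs
    features_used: int

    @property
    def extent(self) -> float:
        """Proportion of the FIPs used by the business process."""
        return self.fips_used / self.fips_in_process

    @property
    def thoroughness(self) -> float:
        """Level of use of the information/functionality at the FIPs."""
        return self.features_used / self.features_available

u = ProcessUsage(fips_in_process=12, fips_used=9, uses_per_week=140,
                 features_available=40, features_used=22)
print(f"extent {u.extent:.0%}, frequency {u.uses_per_week}/week, "
      f"thoroughness {u.thoroughness:.0%}")
```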

Relevance:

20.00%

Publisher:

Abstract:

Identity is unique, multiple and dynamic. This paper explores common attributes of organisational identities and examines the role of performance management systems (PMSs) in revealing identity attributes. One influential PMS, the balanced scorecard, is used to illustrate the arguments. A case study of a public-sector organisation suggests that PMSs now place a value on the intangible aspects of organisational life as well as the financial ones, periodically revealing the distinctiveness, relativity, visibility, fluidity and manageability of public-sector identities that sustain their viability. This paper contributes to a multi-disciplinary approach and its practical application, demonstrating an alternative pathway to identity-making using PMSs.