972 results for parameter measurement


Relevance:

20.00%

Publisher:

Abstract:

Increasing global competitiveness has forced manufacturing organizations to produce high-quality products more quickly and at a competitive cost. To reach these goals, they need good-quality components from suppliers at optimum price and lead time. This has driven companies to adopt improvement practices such as lean manufacturing, Just in Time (JIT) and effective supply chain management. Applying new improvement techniques and tools incurs higher establishment costs and more Information Delay (ID). On the other hand, these techniques may reduce the risk of stock-outs and improve supply chain flexibility, giving better overall performance. However, practitioners are unable to measure the overall effects of these improvement techniques with a standard evaluation model, so an effective overall supply chain performance evaluation model is essential for suppliers as well as manufacturers to assess their companies under different supply chain strategies. The literature on lean supply chain performance evaluation, however, is comparatively limited. Moreover, most existing models assume random values for performance variables. The purpose of this paper is to propose an effective supply chain performance evaluation model using triangular linguistic fuzzy numbers and to recommend optimum ranges for performance variables for lean implementation. The model initially considers all the supply chain performance criteria (input, output and flexibility), converts the values to triangular linguistic fuzzy numbers and evaluates overall supply chain performance under different situations. Results show that, with the proposed performance measurement model, the improvement area for each variable can be accurately identified.
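The evaluation step described above rests on arithmetic over triangular fuzzy numbers. The sketch below is a minimal, generic illustration of that arithmetic (weighted aggregation of linguistic ratings and centroid defuzzification); the linguistic scale, criteria and weights are hypothetical, not the paper's actual model.

```python
# Minimal sketch of aggregating performance ratings expressed as triangular
# fuzzy numbers (l, m, u). Illustrative only: the paper's actual criteria,
# linguistic scale and weighting scheme are not reproduced here.

def tfn_add(a, b):
    """Component-wise addition of two triangular fuzzy numbers."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def tfn_scale(a, k):
    """Multiply a triangular fuzzy number by a crisp scalar k >= 0."""
    return (k * a[0], k * a[1], k * a[2])

def tfn_weighted_average(ratings, weights):
    """Weighted average of triangular fuzzy ratings."""
    total = (0.0, 0.0, 0.0)
    for r, w in zip(ratings, weights):
        total = tfn_add(total, tfn_scale(r, w))
    return total

def defuzzify(a):
    """Centroid defuzzification of a triangular fuzzy number."""
    return (a[0] + a[1] + a[2]) / 3.0

# Hypothetical linguistic scale mapped onto [0, 1]
scale = {"low": (0.0, 0.25, 0.5), "medium": (0.25, 0.5, 0.75), "high": (0.5, 0.75, 1.0)}

ratings = [scale["high"], scale["medium"], scale["high"]]  # e.g. input, output, flexibility
weights = [0.4, 0.3, 0.3]
overall = tfn_weighted_average(ratings, weights)
print(overall, defuzzify(overall))
```

The crisp score from `defuzzify` is what would be compared across supply chain strategies or against recommended ranges.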

Abstract:

Vehicle-emitted particles are of significant concern based on their potential to influence local air quality and human health. Transport microenvironments usually contain higher vehicle emission concentrations compared to other environments, and people spend a substantial amount of time in these microenvironments when commuting. Currently there is limited scientific knowledge on particle concentration, passenger exposure and the distribution of vehicle emissions in transport microenvironments, partially due to the fact that the instrumentation required to conduct such measurements is not available in many research centres. Information on passenger waiting time and location in such microenvironments has also not been investigated, which makes it difficult to evaluate a passenger's spatial-temporal exposure to vehicle emissions. Furthermore, current emission models are incapable of rapidly predicting emission distribution, given the complexity of variations in emission rates that result from changes in driving conditions, as well as the time spent in each driving condition within the transport microenvironment. In order to address these gaps in scientific knowledge, this work conducted, for the first time, a comprehensive statistical analysis of experimental data, along with multi-parameter assessment, exposure evaluation and comparison, and emission model development and application, in relation to traffic interrupted transport microenvironments. The work aimed to quantify and characterise particle emissions and human exposure in transport microenvironments, with bus stations and a pedestrian crossing identified as suitable research locations representing a typical transport microenvironment. Firstly, two bus stations in Brisbane, Australia, with different designs, were selected to conduct measurements of particle number size distributions, particle number and PM2.5 concentrations during two different seasons.
Simultaneous traffic and meteorological parameters were also monitored, aiming to quantify particle characteristics and investigate the impact of bus flow rate, station design and meteorological conditions on particle characteristics at stations. The results showed higher concentrations of PN20-30 at the station situated in an open area (open station), which is likely to be attributed to the lower average daily temperature compared to the station with a canyon structure (canyon station). During precipitation events, it was found that particle number concentration in the size range 25-250 nm decreased greatly, and that the average daily reduction in PM2.5 concentration on rainy days compared to fine days was 44.2% and 22.6% at the open and canyon station, respectively. The effect of ambient wind speeds on particle number concentrations was also examined, and no relationship was found between particle number concentration and wind speed for the entire measurement period. In addition, 33 pairs of average half-hourly PN7-3000 concentrations were calculated and identified at the two stations, during the same time of day, and with the same ambient wind speeds and precipitation conditions. The results of a paired t-test showed that the average half-hourly PN7-3000 concentrations at the two stations were not significantly different at the 5% significance level (t = 0.06, p = 0.96), which indicates that the different station designs were not a crucial factor for influencing PN7-3000 concentrations. Passenger exposure to bus emissions on a platform was further assessed at another bus station in Brisbane, Australia. The sampling was conducted over seven weekdays to investigate spatial-temporal variations in size-fractionated particle number and PM2.5 concentrations, as well as human exposure on the platform.
For the whole day, the average PN13-800 concentration was 1.3 × 10^4 and 1.0 × 10^4 particles/cm^3 at the centre and end of the platform, respectively, of which PN50-100 accounted for the largest proportion of the total count. Furthermore, the contribution of exposure at the bus station to the overall daily exposure was assessed using two assumed scenarios of a school student and an office worker. It was found that, although the daily time fraction (the percentage of time spent at a location in a whole day) at the station was only 0.8%, the daily exposure fractions (the percentage of daily exposure accounted for by a location) at the station were 2.7% and 2.8% for exposure to PN13-800, and 2.7% and 3.5% for exposure to PM2.5, for the school student and the office worker, respectively. A new parameter, "exposure intensity" (the ratio of the daily exposure fraction to the daily time fraction), was also defined and calculated at the station, with values of 3.3 and 3.4 for exposure to PN13-800, and 3.3 and 4.2 for exposure to PM2.5, for the school student and the office worker, respectively. In order to quantify the enhanced emissions at critical locations and define the emission distribution for further dispersion models of traffic interrupted transport microenvironments, a composite line source emission (CLSE) model was developed to specifically quantify exposure levels and describe the spatial variability of vehicle emissions in traffic interrupted microenvironments. This model took into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilised multi-representative segments to accurately capture the emission distribution for real vehicle flow.
This model not only helps to quantify the enhanced emissions at critical locations, but also helps to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bidirectional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emission, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. The CLSE model was also applied at a signalled pedestrian crossing, in order to assess the increased particle number emissions from motor vehicles when forced to stop and accelerate from rest. The CLSE model was used to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses, including 1 car travelling in 1 direction (1 car / 1 direction), 14 cars / 1 direction, 1 bus / 1 direction, 28 cars / 2 directions, 24 cars and 2 buses / 2 directions, and 20 cars and 4 buses / 2 directions. It was found that the total emissions produced while stopped at a red signal were significantly higher than when the traffic moved at a steady speed. Overall, total emissions due to the interruption of the traffic increased by a factor of 13, 11, 45, 11, 41, and 43 for the above 6 cases, respectively. In summary, this PhD thesis presents the results of a comprehensive study on particle number and mass concentration, together with particle size distribution, in a bus station transport microenvironment, influenced by bus flow rates, meteorological conditions and station design.
Passenger spatial-temporal exposure to bus emitted particles was also assessed according to waiting time and location along the platform, as well as the contribution of exposure at the bus station to overall daily exposure. Due to the complexity of the interrupted traffic flow within the transport microenvironments, a unique CLSE model was also developed, which is capable of quantifying emission levels at critical locations within the transport microenvironment, for the purpose of evaluating passenger exposure and conducting simulations of vehicle emission dispersion. The application of the CLSE model at a pedestrian crossing also proved its applicability and simplicity for use in a real-world transport microenvironment.
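The "exposure intensity" parameter defined in this abstract is a simple ratio, sketched below with the rounded values reported for the school student; note that the thesis's own figures (3.3 for PN13-800) reflect unrounded inputs, so the rounded inputs here give a slightly different result.

```python
# Back-of-the-envelope check of the "exposure intensity" parameter defined
# above: the ratio of the daily exposure fraction to the daily time fraction.
# Inputs are the rounded values quoted in the abstract, so the result (~3.4)
# differs slightly from the thesis's reported 3.3.

def exposure_intensity(daily_exposure_fraction, daily_time_fraction):
    """Ratio of daily exposure fraction to daily time fraction (both in %)."""
    return daily_exposure_fraction / daily_time_fraction

daily_time_fraction = 0.8    # % of the day spent at the bus station
pn_exposure_fraction = 2.7   # % of daily PN13-800 exposure received there
print(exposure_intensity(pn_exposure_fraction, daily_time_fraction))  # ~3.4
```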

Abstract:

The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three accepted for publication and the other three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of a Distributed Generation (DG) system with digital estimation of power system signal parameters. DG has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices, which inject high-frequency components in addition to the desired current.
Also, noise and harmonic distortion can impact the performance of the control strategies. To mitigate the negative effects of high-frequency components, harmonics and noise, and to achieve satisfactory performance of DG systems, new methods of signal parameter estimation have been proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters, and methods for the generation of DG reference currents using the estimates provided, is the targeted scope of this thesis. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is a well-known, advanced technique for the estimation of power system frequency. Chapter 2 focuses on an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other novel techniques proposed in this thesis are compared with the PM technique comprehensively studied in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle online. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed by a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter to be used in building the transition matrices.
The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated; it interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, and other parameters, such as the amplitudes and phase angles estimated by the Kalman filter, are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements whereby the noise level is reduced on the sample vector. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a similar structure to a basic Kalman filter, except that the initial settings are computed through extensive mathematical derivation based on the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed through trial-and-error exercises. The simulation results for the estimated signal parameters are improved due to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not very well tracked by Kalman filtering. However, the proposed LES technique is found to be much faster in tracking these changes.
Therefore, an appropriate combination of the LES technique and modified Kalman filtering is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes another algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Also, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 for the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme handles source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion in voltage at the point of common coupling in weak source cases, balances the source currents, and brings the supply-side power factor to a desired value.
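The chapters above all build on Kalman-filter-based estimation of a power signal's amplitude, frequency and phase. The sketch below is a much simpler textbook illustration, not the thesis's modified algorithms: the frequency is assumed known and fixed (the thesis estimates it), and a linear Kalman filter tracks only the in-phase/quadrature amplitudes of a noisy sinusoid, from which amplitude and phase angle follow. All signal values and tuning constants are illustrative.

```python
import numpy as np

# Toy Kalman filter for signal parameter estimation with a KNOWN, fixed
# frequency. The state x = [a, b] holds the quadrature components of
# y_k = a*cos(w*k*T) + b*sin(w*k*T) + noise; amplitude and phase follow from x.

f, fs = 50.0, 2000.0             # signal frequency (Hz) and sampling rate (Hz)
w, T = 2 * np.pi * f, 1.0 / fs
a_true, b_true = 1.5, -0.8       # true quadrature components (amplitude 1.7)

rng = np.random.default_rng(0)
k = np.arange(400)
y = a_true * np.cos(w * k * T) + b_true * np.sin(w * k * T) \
    + 0.05 * rng.standard_normal(k.size)

x = np.zeros(2)                  # state estimate [a, b]
P = np.eye(2) * 10.0             # state covariance (loose initial guess)
Q = np.eye(2) * 1e-8             # tiny process noise: parameters nearly constant
R = 0.05 ** 2                    # measurement noise variance

for ki, yk in zip(k, y):
    H = np.array([np.cos(w * ki * T), np.sin(w * ki * T)])  # observation row
    P = P + Q                                 # predict (state model is constant)
    S = H @ P @ H + R                         # innovation variance
    K = P @ H / S                             # Kalman gain
    x = x + K * (yk - H @ x)                  # measurement update
    P = P - np.outer(K, H @ P)                # covariance update: (I - K H) P

amplitude = np.hypot(x[0], x[1])
# For y = A*cos(w t + phi): a = A cos(phi), b = -A sin(phi)
phase = np.arctan2(-x[1], x[0])
print(amplitude, phase)
```

With the frequency unknown, as in the thesis, the problem becomes nonlinear, which is why the chapters pair the Kalman filter with a separate or interacting frequency estimation unit.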

Abstract:

Inter-Vehicular Communications (IVC) are considered a promising technological approach for enhancing transportation safety and improving highway efficiency. Previous theoretical work has demonstrated the benefits of IVC in vehicle strings. Simulations of partially IVC-equipped vehicle strings showed that only a small equipment ratio is sufficient to drastically reduce the number of head-on collisions. However, these results are based on the assumption that IVC exhibits lossless and instantaneous message transmission. This paper presents the research design of an empirical measurement of a vehicle string, with the goal of highlighting the constraints introduced by the actual characteristics of communication devices. A warning message diffusion system based on IEEE 802.11 wireless technology was developed for an emergency braking scenario. Preliminary results are also presented, showing the latencies introduced by using 802.11a and discussing early findings and experimental limitations.
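The latencies reported in the paper come from 802.11a radios in a real vehicle string; as a toy illustration of the measurement procedure itself (timestamp at send, timestamp at receipt, summarise the per-message delay), the sketch below loops a hypothetical warning message over a loopback UDP socket. It captures only local stack overhead, not radio latency, and the message format is invented.

```python
import socket
import statistics
import time

# Sketch of warning-message latency measurement: embed a send timestamp in
# each message and compare it with the receive timestamp. Sender and receiver
# share one UDP socket on loopback, so figures reflect only local overhead.

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))         # OS picks a free port
addr = sock.getsockname()

latencies_ms = []
for seq in range(50):
    t_send = time.perf_counter()
    sock.sendto(f"BRAKE {seq} {t_send}".encode(), addr)   # hypothetical format
    payload, _ = sock.recvfrom(1024)
    t_recv = time.perf_counter()
    latencies_ms.append((t_recv - float(payload.split()[2])) * 1000)

sock.close()
print(f"median={statistics.median(latencies_ms):.3f} ms  max={max(latencies_ms):.3f} ms")
```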

Abstract:

Objective To determine the test-retest reliability of measurements of thickness, fascicle length (Lf) and pennation angle (θ) of the vastus lateralis (VL) and gastrocnemius medialis (GM) muscles in older adults. Participants Twenty-one healthy older adults (11 men and 10 women; average age 68.1 ± 5.2 years) participated in this study. Methods Ultrasound images (probe frequency 10 MHz) of the VL at two sites (VL sites 1 and 2) were obtained with participants seated with the knee at 90° flexion. For GM measures, participants lay prone with the ankle fixed at 15° dorsiflexion. Measures were taken on two separate occasions, 7 days apart (T1 and T2). Results The ICCs (95% CI) were: VL site 1 thickness = 0.96 (0.90–0.98); VL site 2 thickness = 0.96 (0.90–0.98); VL θ = 0.87 (0.68–0.95); VL Lf = 0.80 (0.50–0.92); GM thickness = 0.97 (0.92–0.99); GM θ = 0.85 (0.62–0.94); and GM Lf = 0.90 (0.75–0.96). The 95% ratio limits of agreement (LOAs) for all measures, calculated by multiplying the standard deviation of the ratio of the results between T1 and T2 by 1.96, ranged from 10.59% to 38.01%. Conclusion The ability of these tests to determine a real change in VL and GM muscle architecture is good at the group level but problematic at the individual level, as the relatively large 95% ratio LOAs in the current study may encompass the changes in architecture observed in other training studies. Therefore, the current findings suggest that B-mode ultrasonography can be used with confidence by researchers when investigating changes in muscle architecture in groups of older adults, but its use is limited in showing changes in individuals over time.
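The 95% ratio limits-of-agreement calculation described in the Results is straightforward to reproduce. The sketch below uses invented test-retest values, not the study's measurements, purely to show the arithmetic.

```python
import statistics

# Sketch of the 95% ratio limits-of-agreement (LOA) calculation described
# above: the standard deviation of the T2/T1 ratios multiplied by 1.96,
# expressed as a percentage. Data are hypothetical.

t1 = [2.10, 1.95, 2.30, 2.05, 2.20, 1.90, 2.15]  # e.g. muscle thickness (cm), session 1
t2 = [2.15, 1.90, 2.35, 2.00, 2.30, 1.85, 2.10]  # same participants, session 2

ratios = [b / a for a, b in zip(t1, t2)]
ratio_loa_pct = 1.96 * statistics.stdev(ratios) * 100
print(f"95% ratio LOA = {ratio_loa_pct:.2f}%")
```

A real change for an individual is only detectable when it exceeds this LOA band, which is why the study judges individual-level use problematic.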

Abstract:

DeLone and McLean (1992, p. 16) argue that the concept of "system use" has suffered from a "too simplistic definition." Despite decades of substantial research on system use, the concept is yet to receive strong theoretical scrutiny. Many measures of system use, and the processes used to develop them, have often been idiosyncratic and lack credibility or comparability. This paper reviews various attempts at the conceptualization and measurement of system use and then proposes a re-conceptualization of it as "the level of incorporation of an information system within a user's processes." The definition is supported by the theory of work systems, system, and Key-User-Group considerations. We then develop the concept of a Functional-Interface-Point (FIP) and four dimensions of system usage: extent, the proportion of the FIPs used by the business process; frequency, the rate at which FIPs are used by the participants in the process; thoroughness, the level of use of the information/functionality provided by the system at an FIP; and attitude towards use, a set of measures that assess the level of comfort, degree of respect and the challenges set forth by the system. The paper argues that the automation level, i.e. the proportion of the business process encoded by the information system, has a mediating impact on system use. The article concludes with a discussion of some implications of this re-conceptualization and areas for follow-on research.

Abstract:

Identity is unique, multiple and dynamic. This paper explores common attributes of organisational identities, and examines the role of performance management systems (PMSs) on revealing identity attributes. One of the influential PMSs, the balanced scorecard, is used to illustrate the arguments. A case study of a public-sector organisation suggests that PMSs now place a value on the intangible aspects of organisational life as well as the financial, periodically revealing distinctiveness, relativity, visibility, fluidity and manageability of public-sector identities that sustain their viability. This paper contributes to a multi-disciplinary approach and its practical application, demonstrating an alternative pathway to identity-making using PMSs.

Abstract:

Since users have become the focus of product/service design in the last decade, the term User eXperience (UX) has been frequently used in the field of Human-Computer Interaction (HCI). Research on UX facilitates a better understanding of the various aspects of the user's interaction with the product or service. Mobile video, as a new and promising service and research field, has attracted great attention. Due to the significance of UX in the success of mobile video (Jordan, 2002), many researchers have focused on this area, examining users' expectations, motivations, requirements, and usage context. As a result, many influencing factors have been explored (Buchinger, Kriglstein, Brandt & Hlavacs, 2011; Buchinger, Kriglstein & Hlavacs, 2009). However, a general framework for the specific mobile video service is lacking to structure such a great number of factors. To measure the user experience of multimedia services such as mobile video, quality of experience (QoE) has recently become a prominent concept. In contrast to the traditionally used concept of quality of service (QoS), QoE not only involves objectively measuring the delivered service but also takes into account the user's needs and desires when using the service, emphasizing the user's overall acceptability of the service. Many QoE metrics are able to estimate the user-perceived quality or acceptability of mobile video, but may not be accurate enough for overall UX prediction due to the complexity of UX. Only a few QoE frameworks have addressed the broader aspects of UX for mobile multimedia applications, and these need to be transformed into practical measures. The challenge of optimizing UX remains: adapting to resource constraints (e.g., network conditions, mobile device capabilities, and heterogeneous usage contexts) as well as meeting complicated user requirements (e.g., usage purposes and personal preferences).
In this chapter, we investigate the existing important UX frameworks, compare their similarities and discuss some important features that fit the mobile video service. Based on previous research, we propose a simple UX framework for mobile video applications by mapping a variety of influencing factors of UX onto a typical mobile video delivery system. Each component and its factors are explored with comprehensive literature reviews. The proposed framework may benefit the user-centred design of mobile video by taking complete consideration of UX influences, and may help improve mobile video service quality by adjusting the values of certain factors to produce a positive user experience. It may also facilitate related research by locating important issues to study, clarifying research scopes, and setting up proper study procedures. We then review a great deal of research on UX measurement, including QoE metrics and QoE frameworks for mobile multimedia. Finally, we discuss how to achieve an optimal quality of user experience by focusing on various aspects of UX of mobile video. In conclusion, we suggest some open issues for future study.

Abstract:

Purpose: The Cobb technique is the universally accepted method for measuring the severity of spinal deformities. Traditionally, Cobb angles have been measured using protractor and pencil on hardcopy radiographic films. The new generation of mobile phones make accurate angle measurement possible using an integrated accelerometer, providing a potentially useful clinical tool for assessing Cobb angles. The purpose of this study was to compare Cobb angle measurements performed using an Apple iPhone and traditional protractor in a series of twenty Adolescent Idiopathic Scoliosis patients. Methods: Seven observers measured major Cobb angles on twenty pre-operative postero-anterior radiographs of Adolescent Idiopathic Scoliosis patients with both a standard protractor and using an Apple iPhone. Five of the observers repeated the measurements at least a week after the original measurements. Results: The mean absolute difference between pairs of iPhone/protractor measurements was 2.1°, with a small (1°) bias toward lower Cobb angles with the iPhone. 95% confidence intervals for intra-observer variability were ±3.3° for the protractor and ±3.9° for the iPhone. 95% confidence intervals for inter-observer variability were ±8.3° for the iPhone and ±7.1° for the protractor. Both of these confidence intervals were within the range of previously published Cobb measurement studies. Conclusions: We conclude that the iPhone is an equivalent Cobb measurement tool to the manual protractor, and measurement times are about 15% less. The widespread availability of inclinometer-equipped mobile phones and the ability to store measurements in later versions of the angle measurement software may make these new technologies attractive for clinical measurement applications.
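An accelerometer-based inclinometer, as used by the iPhone here, derives a tilt angle from the gravity vector: at rest the accelerometer measures only gravity, so the device's rotation in the sensor plane follows from the x/y components. The sketch below uses hypothetical readings and only approximates the app's workflow (two endplate alignments whose tilt readings are subtracted).

```python
import math

# How an accelerometer-based inclinometer derives tilt: the device rotation
# in the sensor's x-y plane follows from the gravity components measured at
# rest. Readings below are hypothetical (in units of g).

def tilt_deg(ax, ay):
    """Tilt of the device in the sensor's x-y plane, in degrees."""
    return math.degrees(math.atan2(ax, ay))

# Align the phone edge with the superior endplate, then the inferior endplate;
# a Cobb angle is the difference between the two tilt readings.
upper = tilt_deg(0.259, 0.966)    # ~15 degrees
lower = tilt_deg(-0.342, 0.940)   # ~-20 degrees
cobb = abs(upper - lower)
print(f"Cobb angle ~= {cobb:.1f} deg")
```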

Abstract:

Pipelines are important lifeline facilities spread over a large area, and they generally encounter a range of seismic hazards and different soil conditions. The seismic response of a buried segmented pipe depends on various parameters, such as the type of buried pipe material and joints, end restraint conditions, soil characteristics, burial depth, and earthquake ground motion. This study highlights the effect of variation in the geotechnical properties of the surrounding soil on the seismic response of a buried pipeline. The variations of the properties of the surrounding soil along the pipe are described by sampling them from predefined probability distributions. The soil-pipe interaction model is developed in OpenSEES. Nonlinear earthquake time-history analysis is performed to study the effect of soil parameter variability on the response of the pipeline. Based on the results, it is found that uncertainty in soil parameters may result in significant response variability of the pipeline.
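The sampling-based treatment of soil variability can be sketched as a small Monte Carlo loop. Everything below is an illustrative assumption: the lognormal stiffness distribution and its parameters are hypothetical, and a trivial stand-in replaces the actual OpenSEES soil-pipe time-history model.

```python
import numpy as np

# Conceptual sketch of sampling soil properties along a segmented pipe from a
# predefined probability distribution and observing the spread this induces
# in a response quantity. The "response" below is a crude stand-in (relative
# flexibility jumps between adjacent segments), not a real seismic analysis.

rng = np.random.default_rng(42)
n_runs, n_segments = 200, 50

# Hypothetical lognormal soil-spring stiffness per pipe segment
mean_k, cov_k = 1.0e4, 0.3                     # kN/m, coefficient of variation
sigma = np.sqrt(np.log(1 + cov_k ** 2))        # lognormal parameters from mean/COV
mu = np.log(mean_k) - 0.5 * sigma ** 2

peak_demands = []
for _ in range(n_runs):
    k = rng.lognormal(mu, sigma, n_segments)   # sampled stiffness profile
    flexibility = 1.0 / k
    # Stand-in demand: contrast between adjacent soft and stiff segments
    peak_demands.append(np.max(np.abs(np.diff(flexibility))))

peak_demands = np.asarray(peak_demands)
print(f"mean={peak_demands.mean():.2e}, cov={peak_demands.std() / peak_demands.mean():.2f}")
```

The run-to-run spread of the response statistic is exactly the "response variability" attributed to soil parameter uncertainty in the abstract.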

Abstract:

Purpose: To assess the accuracy of intraocular pressure (IOP) measurements using rebound tonometry over disposable hydrogel (etafilcon A) and silicone hydrogel (senofilcon A) contact lenses (CLs) of different powers. Methods: The experimental group comprised 36 subjects (19 male, 17 female). IOP measurements were undertaken on the subjects' right eyes in random order using a rebound tonometer (ICare). The CLs had powers of +2.00 D, −2.00 D and −6.00 D. Six measurements were taken over each contact lens, and also before and after the CLs had been worn. Results: A good correlation was found between IOP measurements with and without CLs (all r ≥ 0.80; p < 0.05). Bland-Altman plots did not show any significant trend in the difference in IOP readings with and without CLs as a function of IOP value. A two-way ANOVA revealed a significant effect of material and power (p < 0.01) but no interaction. All the comparisons between the measurements without CLs and with hydrogel CLs were significant (p < 0.01); the comparisons with silicone hydrogel CLs were not. Conclusions: Rebound tonometry can be reliably performed over silicone hydrogel CLs. With hydrogel CLs, the measurements were lower than those without CLs. However, although these differences were statistically significant, their clinical significance was minimal.

Abstract:

This paper reports on a mathematics project conducted with six Torres Strait Islander schools and communities by the research team at the YuMi Deadly Centre at QUT. Data were collected from a small focus group of six teachers and two teacher aides. We investigated how measurement is taught and learned by students, their teachers and teacher aides in the community schools. A key focus of the project was that the teaching and learning of measurement be contextualised to the students' culture, community and home languages. A significant finding from the project was that the teachers had differing levels of knowledge and understanding about how to contextualise measurement to support student learning. For example, an Indigenous teacher identified that mathematics and the environment are relational; that is, they are not discrete and isolated from one another but mesh together, affording articulation and interchange between mathematics and Torres Strait Islander culture.

Abstract:

A wireless sensor network system must have the ability to tolerate harsh environmental conditions and reduce communication failures. In a typical outdoor situation, the presence of wind can introduce movement in the foliage. This motion of vegetation structures causes large and rapid signal fading in the communication link and must be accounted for when deploying a wireless sensor network system in such conditions. This thesis examines the fading characteristics experienced by wireless sensor nodes due to the effect of varying wind speed in a foliage-obstructed transmission path. It presents extensive measurement campaigns at two locations using a typical wireless sensor network configuration. The significance of this research lies in the varied approaches of its different experiments, involving a variety of vegetation types, scenarios and the use of different polarisations (vertical and horizontal). The non-line-of-sight (NLoS) scenarios investigate the wind effect for different vegetation densities, including the Acacia tree, Dogbane tree and tall grass, whereas the line-of-sight (LoS) scenario investigates the effect of wind when swaying grass affects the ground-reflected component of the signal. The vegetation types and scenarios were chosen to simulate real-life working conditions of wireless sensor network systems in outdoor foliated environments. The results from the measurements are presented as statistical models involving first- and second-order statistics. We found that in most cases the fading amplitude could be approximated by both the Lognormal and the Nakagami distribution, whose m parameter was found to depend on received power fluctuations. The Lognormal distribution is known to result from the slow fading characteristics of shadowing. This study concludes that fading caused by wind-induced variations in received power in wireless sensor network systems is insignificant.
There is no notable difference in Nakagami m values for low, calm, and windy wind speed categories. It is also shown in the second order analysis, the duration of the deep fades are very short, 0.1 second for 10 dB attenuation below RMS level for vertical polarization and 0.01 second for 10 dB attenuation below RMS level for horizontal polarization. Another key finding is that the received signal strength for horizontal polarisation demonstrates more than 3 dB better performances than the vertical polarisation for LoS and near LoS (thin vegetation) conditions and up to 10 dB better for denser vegetation conditions.
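The Nakagami m parameter discussed above can be estimated from measured fading amplitudes with a simple moment-based estimator: since the received power r² of a Nakagami-m amplitude is Gamma-distributed, m equals E[r²]² / Var(r²). The sketch below simulates Nakagami samples and recovers m; the parameter values (m = 1.5, unit spread) are illustrative, not figures from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# If X ~ Gamma(shape=m, scale=omega/m), then sqrt(X) is Nakagami-m
# distributed with spread parameter omega (mean power E[r^2] = omega).
def nakagami_samples(m, omega, n, rng):
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=n))

# Moment-based (inverse normalized variance) estimator of m:
# m_hat = E[r^2]^2 / Var(r^2)
def estimate_m(r):
    p = r ** 2
    return np.mean(p) ** 2 / np.var(p)

r = nakagami_samples(m=1.5, omega=1.0, n=200_000, rng=rng)
m_hat = estimate_m(r)
print(f"estimated m: {m_hat:.2f}")  # should be close to 1.5
```

Comparing m_hat across wind-speed categories (calm, low, windy) is exactly the kind of check the thesis reports: similar m values across categories indicate the wind has little effect on the fading statistics.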

Relevância:

20.00%

Publicador:

Resumo:

The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has recently conducted a technology demonstration of a novel fixed wireless broadband access system in rural Australia. The system is based on multi-user multiple-input multiple-output orthogonal frequency division multiplexing (MU-MIMO-OFDM). It demonstrated an uplink with six simultaneous users at distances ranging from 10 m to 8.5 km from a central tower, achieving a spectral efficiency of 20 bit/s/Hz. This paper reports on the analysis of channel capacity and bit error probability simulations based on the measured MU-MIMO-OFDM channels obtained during the demonstration, and their comparison with results based on channels generated by a novel geometric-optics-based channel model suitable for MU-MIMO-OFDM in rural areas. Despite its simplicity, the model was found to predict channel capacity and bit error probability accurately for a typical MU-MIMO-OFDM deployment scenario.
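The uplink sum capacity compared in the paper follows the standard MIMO formula C = log2 det(I + SNR · H Hᴴ) in bit/s/Hz, where H stacks one column per single-antenna user. A minimal sketch is shown below; the antenna count, user count, SNR, and the i.i.d. Rayleigh channel are illustrative assumptions, not the demonstration's measured parameters or its geometric-optics model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mu_mimo_sum_capacity(H, snr_linear):
    """Uplink sum capacity in bit/s/Hz: log2 det(I + snr * H H^H).

    H has shape (Nr, K): Nr receive antennas, K single-antenna users,
    each transmitting at the same per-user SNR.
    """
    nr = H.shape[0]
    # slogdet is numerically safer than det for larger matrices.
    _, logdet = np.linalg.slogdet(np.eye(nr) + snr_linear * (H @ H.conj().T))
    return logdet / np.log(2)

K, Nr = 6, 8  # hypothetical: 6 simultaneous users, 8-antenna central tower
H = (rng.standard_normal((Nr, K)) + 1j * rng.standard_normal((Nr, K))) / np.sqrt(2)
snr_db = 10.0
se = mu_mimo_sum_capacity(H, 10 ** (snr_db / 10))
print(f"spectral efficiency: {se:.1f} bit/s/Hz")
```

Substituting measured channel matrices for the random H is how capacity results like the 20 bit/s/Hz figure are evaluated against a channel model's predictions.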

Relevância:

20.00%

Publicador:

Resumo:

Study Design. A sheep study designed to compare the accuracy of static radiographs, dynamic radiographs, and computed tomographic (CT) scans for the assessment of thoracolumbar facet joint fusion, as determined by micro-CT scanning. Objective. To determine the accuracy and reliability of conventional imaging techniques in identifying the status of thoracolumbar (T13-L1) facet joint fusion in a sheep model. Summary of Background Data. Plain radiographs are commonly used to determine the integrity of surgical arthrodesis of the thoracolumbar spine. Many previous studies of fusion success have relied solely on postoperative assessment of plain radiographs, a technique lacking sensitivity for pseudarthrosis. CT may be a more reliable technique, but is less well characterized. Methods. Eleven adult sheep were randomized to either attempted arthrodesis using autogenous bone graft and internal fixation (n = 3) or intentional pseudarthrosis (IP) using oxidized cellulose and internal fixation (n = 8). After 6 months, facet joint fusion was assessed by independent observers using (1) plain static radiography alone, (2) additional dynamic radiographs, and (3) additional reconstructed spiral CT imaging. These assessments were correlated with high-resolution micro-CT imaging to predict the utility of the conventional imaging techniques in the estimation of fusion success. Results. Plain radiography alone correctly predicted fusion or pseudarthrosis in 43% of cases, and accuracy was not improved by adding dynamic radiography (also 43%). Adding reformatted CT imaging to the plain radiography techniques increased the proportion of correctly predicted fusion outcomes to 86%. The sensitivity, specificity, and accuracy were 0.33, 0.55, and 0.43 for static radiography; 0.46, 0.40, and 0.43 for dynamic radiography; and 0.88, 0.85, and 0.86 for radiography plus CT, respectively. Conclusion.
CT-based evaluation correlated most closely with high-resolution micro-CT imaging. Neither plain static nor dynamic radiographs were able to predict fusion outcome accurately. © 2012 Lippincott Williams & Wilkins.
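The sensitivity, specificity, and accuracy figures reported above follow directly from a 2x2 confusion matrix of each modality's calls against the micro-CT gold standard. The sketch below shows the arithmetic; the counts are hypothetical, chosen only for illustration, and are not the study's raw data.

```python
# Diagnostic metrics from a 2x2 confusion matrix, where micro-CT is
# the reference standard:
#   tp = truly fused joints called fused, fn = truly fused called not fused,
#   fp = pseudarthroses called fused,    tn = pseudarthroses called not fused.
def diagnostic_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)                  # fused joints detected
    specificity = tn / (tn + fp)                  # pseudarthroses detected
    accuracy = (tp + tn) / (tp + fn + fp + tn)    # all correct calls
    return sensitivity, specificity, accuracy

# Hypothetical counts (illustration only, not the study's data):
sens, spec, acc = diagnostic_metrics(tp=7, fn=1, fp=2, tn=11)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
```

Note that accuracy alone can mask poor performance on one class, which is why the study reports all three metrics for each imaging technique.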