858 results for ADAPTED ANALYTICAL MODEL


Relevance:

90.00%

Publisher:

Abstract:

Most of the analytical models devoted to determining the acoustic properties of a rigid perforated panel consider the acoustic impedance of a single hole and then use the porosity to determine the impedance of the whole panel. However, in the case of a non-homogeneous hole distribution or more complex configurations, this approach is no longer valid. This work explores some of these limitations and proposes a finite element methodology that implements the linearized Navier-Stokes equations in the frequency domain to analyse the acoustic performance of perforated panel absorbers under normal incidence. Preliminary results for a homogeneous perforated panel show that the sound absorption coefficient derived from the Maa analytical model does not match that obtained from the simulations. These differences are mainly attributed to the finite-geometry effect and to the spatial distribution of the perforations in the numerical case. To confirm these statements, the acoustic field in the vicinity of the perforations is analysed for a more complex perforated panel configuration. Additionally, experimental studies are carried out in an impedance tube for the same configuration and compared to the previous methods. The proposed methodology is shown to be in better agreement with the laboratory measurements than the analytical approach.
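
For reference, the Maa analytical model mentioned above can be evaluated directly. The sketch below computes the normal-incidence absorption coefficient of a perforated panel backed by an air cavity, using the commonly published Maa expressions; the panel dimensions are illustrative assumptions, not those of the panels studied here.

```python
import numpy as np

# Illustrative sketch of Maa's model for a perforated panel backed by an
# air cavity (normal incidence). Parameter values are assumptions.
rho0, c0 = 1.21, 343.0          # air density [kg/m^3], speed of sound [m/s]
eta = 1.84e-5                   # dynamic viscosity of air [Pa.s]
d, t = 0.8e-3, 1.0e-3           # hole diameter, panel thickness [m]
sigma = 0.01                    # porosity (perforation ratio)
D = 0.05                        # cavity depth [m]

f = np.linspace(100, 4000, 500)
omega = 2 * np.pi * f
k = d * np.sqrt(omega * rho0 / (4 * eta))   # perforation constant

# Normalized resistance and reactance of the panel (Maa, 1998)
r = (32 * eta * t) / (sigma * rho0 * c0 * d**2) * (
    np.sqrt(1 + k**2 / 32) + np.sqrt(2) / 32 * k * d / t)
x = (omega * t) / (sigma * c0) * (
    1 + 1 / np.sqrt(9 + k**2 / 2) + 0.85 * d / t)

# Cavity reactance enters as -cot(omega*D/c0); absorption coefficient:
alpha = 4 * r / ((1 + r)**2 + (x - 1 / np.tan(omega * D / c0))**2)
print(f"peak absorption: {alpha.max():.2f} at {f[alpha.argmax()]:.0f} Hz")
```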

Relevance:

90.00%

Publisher:

Abstract:

We compare the Q parameter obtained from the semi-analytical model with scalar and vector models for two realistic transmission systems: first, a linear system with a compensated dispersion map, and second, a soliton transmission system.

Relevance:

90.00%

Publisher:

Abstract:

The IEEE 802.11 standard has achieved huge success in the past decade and is still under development to provide higher physical data rates and better quality of service (QoS). An important problem for the development and optimization of IEEE 802.11 networks is the modeling of the MAC layer channel access protocol. Although there are already many theoretical analyses of the 802.11 MAC protocol in the literature, most of the models focus on saturated traffic and assume an infinite buffer at the MAC layer. In this paper we develop a unified analytical model for the IEEE 802.11 MAC protocol in ad hoc networks. The impacts of channel access parameters, traffic rate and buffer size at the MAC layer are modeled with the assistance of a generalized Markov chain and an M/G/1/K queue model. Throughput, packet delivery delay and dropping probability can all be derived from the model. Extensive simulations show the analytical model is highly accurate. The analytical model shows that, for practical buffer configurations (e.g. buffer size larger than one), we can maximize the total throughput and reduce the packet blocking probability (due to limited buffer size) and the average queuing delay to zero by effectively controlling the offered load. The average MAC layer service delay, as well as its standard deviation, is also much lower than in saturated conditions and has an upper bound. It is also observed that the optimal load is very close to the maximum achievable throughput regardless of the number of stations or buffer size. Moreover, the model is scalable for performance analysis of 802.11e in unsaturated conditions and of 802.11 ad hoc networks with heterogeneous traffic flows. © 2012 KSI.
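
The generalized Markov chain in this abstract extends Bianchi's classical saturation model of 802.11 DCF. As a minimal point of reference (not the paper's unsaturated M/G/1/K model), the sketch below solves Bianchi's well-known fixed point for the transmission probability; W, m and n are illustrative parameters.

```python
# Minimal sketch of Bianchi's saturation fixed point for 802.11 DCF.
# NOT the paper's unsaturated M/G/1/K model; W = CWmin, m = max backoff
# stage, n = number of stations (illustrative values).
def bianchi_tau(n, W=32, m=5, iters=500):
    tau = 0.01
    for _ in range(iters):
        p = 1 - (1 - tau) ** (n - 1)          # conditional collision prob.
        tau_new = 2 * (1 - 2 * p) / (
            (1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))
        tau = 0.5 * tau + 0.5 * tau_new       # damped update for convergence
    return tau, 1 - (1 - tau) ** (n - 1)

for n in (5, 10, 20):
    tau, p = bianchi_tau(n)
    print(f"n={n:2d}: tau={tau:.4f}, collision prob p={p:.4f}")
```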

Relevance:

90.00%

Publisher:

Abstract:

Quality, production and technological innovation management rank among the most important matters of concern to modern manufacturing organisations. They can provide companies with the decisive means of gaining a competitive advantage, especially within industries where there is an increasing similarity in product design and manufacturing processes. The papers in this special issue of the International Journal of Technology Management have all been selected as examples of how aspects of quality, production and technological innovation can help to improve competitive performance. Most are based on presentations made at the UK Operations Management Association's Sixth International Conference, held at Aston University, whose theme was 'Getting Ahead Through Technology and People'. At the conference itself, over 80 papers were presented by authors from 15 countries around the world. Among the many topics addressed within the conference theme, technological innovation, quality and production management emerged as attracting the greatest concern and interest of delegates, particularly those from industry.

For any new initiative to be implemented successfully, it should be led from the top of the organisation. Achieving the desired level of commitment from top management can, however, be difficult. In the first paper of this issue, Mackness investigates this question by explaining how systems thinking can help. In the systems approach, properties such as 'emergence', 'hierarchy', 'communication' and 'control' are used to assist top managers in preparing for change. Mackness's paper is complemented by Iijima and Hasegawa's contribution, in which they investigate the development of Quality Information Management (QIM) in Japan. They present the idea of a Design Review and demonstrate how it can be used to trace and reduce quality-related losses. The next paper on the subject of quality is by Whittle and colleagues. It relates to total quality and the process of culture change within organisations. Using the findings of investigations carried out in a number of case study companies, they describe four generic models which have been identified as characterising methods of implementing total quality within existing organisation cultures. Boaden and Dale's paper also relates to the management of quality, but looks specifically at the construction industry, where it has been found there is still some confusion over the roles of Quality Assurance (QA) and Total Quality Management (TQM). They describe the results of a questionnaire survey of forty companies in the industry and compare them to similar work carried out in other industries. Szakonyi's contribution completes this group of papers relating specifically to the question of quality. His concern is with the two ways in which R&D or engineering managers can work on improving quality: first by improving it in the laboratory, and second by working with other functions to improve quality across the company.

The next group of papers all address aspects of production management. Umeda's paper proposes a new manufacturing-oriented simulation package for production management which provides important information for both the design and operation of manufacturing systems. A simulation for production strategy in a Computer Integrated Manufacturing (CIM) environment is also discussed. This paper is followed by a contribution by Tanaka and colleagues, in which they consider loading schedules for manufacturing orders in a Material Requirements Planning (MRP) environment. They compare mathematical programming with a knowledge-based approach and comment on their relative effectiveness in different practical situations. Engstrom and Medbo's paper then looks at a particular aspect of production system design, namely the question of devising group working arrangements for assembly with new product structures. Using the case of a Swedish vehicle assembly plant where long-cycle assembly work has been adopted, they advocate the use of a generally applicable product structure which can be adapted to suit individual local conditions. In the last paper of this group, Tay considers how automation has affected production efficiency in Singapore. Using data from ten major industries, he identifies several factors which are positively correlated with efficiency, with capital intensity being of greatest interest to policy makers.

The two following papers examine the case of electronic data interchange (EDI) as a means of improving the efficiency and quality of trading relationships. Banerjee and Banerjee consider a particular approach to material provisioning for production systems using orderless inventory replenishment. Using the example of a single supplier and multiple buyers, they develop an analytical model which is applicable to the exchange of information between trading partners using EDI. They conclude that EDI-based inventory control can be attractive from economic as well as other standpoints, and that the approach is consistent with, and can be instrumental in moving towards, just-in-time (JIT) inventory management. Slacker's complementary viewpoint on EDI is from the perspective of the quality relationship between the customer and supplier. Based on the experience of Lucas, a supplier within the automotive industry, he concludes that both banks and trading companies must take responsibility for the development of payment mechanisms which satisfy the requirements of quality trading.

The three final papers relate to technological innovation and are all country based. Berman and Khalil report on a survey of US technological effectiveness in the global economy. The importance of education is supported in their conclusions, although it remains unclear to what extent the US government can play a wider role in promoting technological innovation and new industries. The role of technology in national development is taken up by Martinsons and Valdemars, who examine the case of the former Soviet Union. The failure to successfully infuse technology into Soviet enterprises is seen as a factor in that country's demise, and it is anticipated that the newly liberalised economies will be able to encourage greater technological creativity. This point is taken up in Perminov's concluding paper, which looks in detail at Russia. Here a similar analysis is made of the Soviet Union's technological decline, but a development strategy is also presented within the context of the change from a centralised to a free market economy.

The papers included in this special issue of the International Journal of Technology Management each represent a unique contribution to their own specific area of concern. Together, however, they also demonstrate the general improvements in competitive performance that can be achieved through the application of modern principles and practice to the management of quality, production and technological innovation.

Relevance:

90.00%

Publisher:

Abstract:

Long-haul high-speed optical transmission systems are significantly distorted by the interplay between electronic chromatic dispersion (CD) equalization and the local oscillator (LO) laser phase noise, which leads to an effect known as equalization enhanced phase noise (EEPN). EEPN degrades the performance of optical communication systems severely with increasing fiber dispersion, LO laser linewidth, symbol rate, and modulation format order. In this paper, we present an analytical model for evaluating the bit-error-rate (BER) versus signal-to-noise ratio (SNR) performance of n-level phase shift keying (n-PSK) coherent transmission systems employing differential carrier phase estimation (CPE), where the influence of EEPN is considered. Theoretical results based on this model have been investigated for the differential quadrature phase shift keying (DQPSK), differential 8-PSK (D8PSK), and differential 16-PSK (D16PSK) coherent transmission systems. The influence of EEPN on the BER performance in terms of the fiber dispersion, the LO phase noise, the symbol rate, and the modulation format is analyzed in detail. The BER behaviors predicted by this analytical model show good agreement with previously reported BER floors caused by EEPN. Further simulations have also been carried out for the differential CPE considering EEPN. The results indicate that this analytical model gives an accurate prediction for the DQPSK system, and a leading-order approximation for the D8PSK and D16PSK systems.
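
A widely cited closed form for the EEPN phase-noise variance (after Shieh and Ho) illustrates the scaling with dispersion, linewidth and symbol rate described above. The expression is quoted from memory as an assumption, not reproduced from this paper, and the parameter values below are illustrative.

```python
import numpy as np

# Hedged sketch: commonly cited EEPN variance (after Shieh & Ho, 2008),
#   sigma^2 = pi * lam^2 * D * L * dnu / (2 * c * Ts).
# All values are illustrative assumptions.
lam = 1550e-9            # wavelength [m]
c = 3e8                  # speed of light [m/s]
D = 17e-6                # dispersion [s/m^2] (= 17 ps/nm/km)
L = 2000e3               # fiber length [m]
dnu = 100e3              # LO laser linewidth [Hz]
Rs = 28e9                # symbol rate [baud]; Ts = 1/Rs

sigma2_eepn = np.pi * lam**2 * D * L * dnu * Rs / (2 * c)
sigma2_lo = 2 * np.pi * dnu / Rs   # intrinsic LO phase-noise variance/symbol
print(f"EEPN variance:  {sigma2_eepn:.3e} rad^2")
print(f"LO PN variance: {sigma2_lo:.3e} rad^2 per symbol")
```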

Relevance:

90.00%

Publisher:

Abstract:

Video streaming via Transmission Control Protocol (TCP) networks has become a popular and highly demanded service, but its quality assessment in both objective and subjective terms has not been properly addressed. In this paper, a full analytical model of a no-reference objective metric, namely pause intensity (PI), for video quality assessment is presented based on statistical analysis. The model characterizes the video playout buffer behavior in connection with the network performance (throughput) and the video playout rate. This allows for instant quality measurement and control without requiring a reference video. PI specifically addresses the need to assess quality in terms of the continuity of playout of TCP streaming videos, which cannot be properly measured by other objective metrics such as peak signal-to-noise ratio, structural similarity, and buffer underrun or pause frequency. The performance of the analytical model is rigorously verified by simulation results and subjective tests using a range of video clips. It is demonstrated that PI is closely correlated with viewers' opinion scores regardless of the vastly different composition of the individual elements, such as pause duration and pause frequency, which jointly constitute this new quality metric. It is also shown that the correlation performance of PI is consistent and content independent. © 2013 IEEE.
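
The exact statistical formulation of PI is developed in the paper itself. As a rough illustration only, the sketch below simulates a playout buffer fed at the network throughput and drained at the playout rate, and reports the fraction of time spent paused together with the pause count — the kind of elements PI combines. The buffer model, threshold and traffic statistics are assumptions.

```python
import numpy as np

# Rough illustration (not the paper's analytical PI model): a playout
# buffer filled at network throughput and drained at the playout rate.
# All parameters are illustrative assumptions.
rng = np.random.default_rng(1)
T, dt = 300.0, 0.1                        # horizon [s], time step [s]
playout_rate = 1.0e6                      # bits/s consumed while playing
throughput = rng.normal(0.95e6, 0.3e6, int(T / dt)).clip(min=0.0)
start_threshold = 2.0e6                   # bits buffered before (re)start

buf, playing, paused_time, pauses = 0.0, False, 0.0, 0
for thr in throughput:
    buf += thr * dt
    if playing:
        buf -= playout_rate * dt
        if buf <= 0.0:                    # buffer underrun -> pause
            buf, playing, pauses = 0.0, False, pauses + 1
    else:                                 # initial buffering also counted
        paused_time += dt
        if buf >= start_threshold:        # enough prefetched -> resume
            playing = True

print(f"fraction of time paused: {paused_time / T:.2%}, pauses: {pauses}")
```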

Relevance:

90.00%

Publisher:

Abstract:

This dissertation aimed to improve travel time estimation for the purpose of transportation planning by developing a travel time estimation method that incorporates the effects of signal timing plans, which are difficult to consider in planning models. For this purpose, an analytical model has been developed. The model parameters were calibrated based on data from CORSIM microscopic simulation, with signal timing plans optimized using the TRANSYT-7F software. Independent variables in the model are link length, free-flow speed, and traffic volumes from the competing turning movements. The developed model has three advantages over traditional link-based or node-based models. First, the model considers the influence of signal timing plans for a variety of traffic volume combinations without requiring signal timing information as input. Second, the model describes the non-uniform spatial distribution of delay along a link, making it possible to estimate the impacts of queues at different upstream locations of an intersection and to attribute delays to a subject link and its upstream link. Third, the model shows promise of improving the accuracy of travel time prediction. The mean absolute percentage error (MAPE) of the model is 13% for a set of field data from the Minnesota Department of Transportation (MDOT); this is close to the MAPE of the uniform delay in the HCM 2000 method (11%). The HCM is the industry-accepted analytical model in the existing literature, but it requires signal timing information as input for calculating delays. The developed model also outperforms the HCM 2000 method for a set of Miami-Dade County data representing congested traffic conditions, with a MAPE of 29% compared to 31% for the HCM 2000 method. The advantages of the proposed model make it feasible for application to a large network without the burden of signal timing input, while improving the accuracy of travel time estimation. An assignment model with the developed travel time estimation method has been implemented in a South Florida planning model, where it improved assignment results.
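
The MAPE figures quoted above follow the standard definition; a minimal sketch, using made-up travel times rather than the MDOT or Miami-Dade data, is:

```python
import numpy as np

# Minimal sketch of the mean absolute percentage error (MAPE) used to
# compare the model with the HCM 2000 method. The travel times below
# are made-up illustrative values.
observed = np.array([42.0, 55.0, 63.0, 38.0, 71.0])    # seconds
estimated = np.array([45.0, 51.0, 70.0, 40.0, 66.0])   # seconds

mape = np.mean(np.abs(estimated - observed) / observed) * 100
print(f"MAPE = {mape:.1f}%")
```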

Relevance:

90.00%

Publisher:

Abstract:

This research investigates a new structural system utilising modular construction. Five-sided boxes are cast on-site and stacked together to form a building. An analytical model of a typical building was created in each of two finite element analysis programs (Robot Millennium and ETABS). The pros and cons of both Robot Millennium and ETABS are listed at several key stages in the development of an analytical model utilising this structural system. Robot Millennium was initially utilised but created an analytical model too large to be run successfully; the computational requirements exceeded those of conventional computers. Robot Millennium was therefore abandoned in favour of ETABS, whose more simplistic algorithms and assumptions permitted running this large computational model. Tips are provided, and pitfalls signalled, throughout the process of modelling such complex buildings of this type. The building under high seismic loading required a new horizontal shear mechanism. This dissertation proposes creating a secondary floor that ties to the modular box through the use of gunwales, and roughened surfaces with epoxy coatings. In addition, the vertical connections necessitated a new type of shear wall, consisting of waffled external walls tied through both reinforcement and a secondary concrete pour. This structural system has generated a new building which was found to be very rigid compared to a conventional structure. The proposed modular building exhibited a period of 1.27 seconds, about one-fifth that of a conventional building. The maximum lateral drift occurs under seismic loading with a magnitude of 6.14 inches, one-quarter of a conventional building's drift. The deflected shape and pattern of the interstorey drifts are consistent with those of a coupled shear wall building. In conclusion, the computer analyses indicate that this new structure exceeds current code requirements for both hurricane winds and high seismic loads, while providing a shortened construction time at reduced cost.

Relevance:

90.00%

Publisher:

Abstract:

This paper proposes extended nonlinear analytical models, third-order models, of compliant parallelogram mechanisms. These models are capable of accurately capturing the effects of the very large axial force within a transverse motion range of 10% of the beam length, by incorporating terms associated with the high-order (up to third-order) axial force. Firstly, the free-body diagram method is employed to derive the nonlinear analytical model for a basic compliant parallelogram mechanism based on the load-displacement relations of a single beam, geometry compatibility conditions, and load-equilibrium conditions. The procedures for the forward and inverse solutions are described. Nonlinear analytical models for guided compliant multi-beam parallelogram mechanisms are then obtained. A case study of the compound compliant parallelogram mechanism, composed of two basic compliant parallelogram mechanisms in symmetry, is further implemented. This work estimates the internal axial force change, the transverse force change, and the transverse stiffness change with the transverse motion using the proposed third-order model, in comparison with the first-order model proposed in the prior art. In addition, FEA (finite element analysis) results validate the accuracy of the third-order model for a typical example. It is shown that in the case study the slenderness ratio significantly affects the discrepancy between the third-order model and the first-order model, and that the third-order model can capture a non-monotonic transverse stiffness curve if the beam is thin enough.
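
The third-order load-displacement relations themselves are developed in the paper. As a much simpler point of comparison, the sketch below evaluates only the familiar first-order axial stiffening of a fixed-guided beam — the effect the higher-order terms refine — with illustrative dimensions.

```python
# Simple point of comparison (NOT the paper's third-order model): the
# first-order transverse stiffness of a fixed-guided beam including the
# axial stress-stiffening term, k = 12*E*I/L^3 + (6/5)*P/L.
# Dimensions and material are illustrative assumptions.
E = 69e9                         # Young's modulus, aluminium [Pa]
L, b, t = 50e-3, 10e-3, 0.5e-3   # beam length, width, thickness [m]
I = b * t**3 / 12                # second moment of area [m^4]

def k_transverse(P):
    """First-order lateral stiffness under axial force P (tension > 0)."""
    return 12 * E * I / L**3 + 6 * P / (5 * L)

for P in (0.0, 10.0, -10.0):     # no load, tension, compression [N]
    print(f"P = {P:+5.1f} N -> k = {k_transverse(P):8.1f} N/m")
```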

Relevance:

80.00%

Publisher:

Abstract:

Engineering assets are often complex systems. In a complex system, components often have failure interactions which lead to interactive failures. A system with interactive failures may have an increased failure probability, so interactive failures may have to be taken into account when designing and maintaining complex engineering systems. To address this issue, Sun et al. have developed an analytical model for interactive failures. In this model, the degree of interaction between two components is represented by interactive coefficients. To use this model for failure analysis, the related interactive coefficients must be estimated. However, methods for estimating these coefficients have not been reported. To fill this gap, this paper presents five methods to estimate the interactive coefficients: a probabilistic method; a failure-data-based analysis method; a laboratory experimental method; a failure interaction mechanism based method; and an expert estimation method. Examples are given to demonstrate the applications of the proposed methods, and comparisons among the methods are also presented.
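
As we read Sun et al.'s model, each component's hazard is its independent hazard plus contributions from other components weighted by the interactive coefficients. A minimal numerical sketch of that coupling, with made-up coefficient values, is:

```python
import numpy as np

# Minimal sketch of interactive-failure coupling (our reading of Sun et
# al.'s model): lam = lam_indep + Theta @ lam, hence
# lam = inv(I - Theta) @ lam_indep. Coefficients are made-up values.
lam_indep = np.array([0.010, 0.020, 0.015])   # independent hazards [1/h]
Theta = np.array([[0.0, 0.3, 0.0],            # Theta[i, j]: effect of j on i
                  [0.1, 0.0, 0.2],
                  [0.0, 0.4, 0.0]])

lam = np.linalg.solve(np.eye(3) - Theta, lam_indep)
for i, (li, lind) in enumerate(zip(lam, lam_indep)):
    print(f"component {i}: independent {lind:.4f} -> interactive {li:.4f}")
```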

Relevance:

80.00%

Publisher:

Abstract:

This paper presents the feasibility of using structural modal strain energy as a parameter in a correlation-based damage detection method for truss bridge structures. It is an extension of the damage detection method adopting the multiple damage location assurance criterion. In this paper, the sensitivity of modal strain energy to damage, obtained from the analytical model, is incorporated into the correlation objective function. Firstly, the sensitivity matrix of modal strain energy to damage is computed offline; then, for an arbitrary damage case, the correlation coefficient (objective function) is calculated by multiplying the sensitivity matrix by the damage vector. A genetic algorithm is then used to iteratively search for the damage vector maximising the correlation between the corresponding (hypothesised) modal strain energy change and its measured counterpart. The proposed method is simulated on a numerical truss bridge structure and compared with conventional methods, e.g. the frequency-error method, the coordinate modal assurance criterion and the mode-shape-based multiple damage location assurance criterion. The results demonstrate that the modal strain energy correlation method is able to yield acceptable damage detection outcomes with less computational effort, even in a noise-contaminated condition.
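
A minimal sketch of the correlation objective described above — the predicted modal strain energy change from the offline sensitivity matrix correlated against a measured change — is given below with made-up matrices; the genetic algorithm search itself is omitted.

```python
import numpy as np

# Minimal sketch of the correlation objective: predicted modal strain
# energy (MSE) change S @ d for a candidate damage vector d, correlated
# against a measured MSE change (MDLAC-style). Matrices are made-up;
# the GA that searches over d is omitted.
rng = np.random.default_rng(0)
S = rng.random((8, 4))                   # sensitivity: 8 modes x 4 elements
d_true = np.array([0.0, 0.3, 0.0, 0.1])  # "true" damage vector
measured = S @ d_true + rng.normal(0, 0.01, 8)   # noisy measured MSE change

def objective(d):
    pred = S @ d
    return abs(measured @ pred) ** 2 / ((measured @ measured) * (pred @ pred))

print(f"objective at true damage:  {objective(d_true):.3f}")
print(f"objective at wrong damage: {objective(np.array([0.3, 0, 0.1, 0])):.3f}")
```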

Relevance:

80.00%

Publisher:

Abstract:

Wireless network technologies, such as IEEE 802.11 based wireless local area networks (WLANs), have been adopted in wireless networked control systems (WNCS) for real-time applications. Distributed real-time control requires (soft) real-time performance from the underlying networks for the delivery of real-time traffic. However, IEEE 802.11 networks are not designed for WNCS applications: they neither inherently provide quality-of-service (QoS) support, nor explicitly consider the characteristics of the real-time traffic on networked control systems (NCS), i.e., periodic round-trip traffic. The adoption of 802.11 networks in real-time WNCSs therefore poses challenging problems for network design and performance analysis. Theoretical methodologies are yet to be developed for computing the best achievable WNCS network performance under the constraints of real-time control requirements. Focusing on IEEE 802.11 distributed coordination function (DCF) based WNCSs, this paper analyses several important NCS network performance indices, such as throughput capacity, round trip time and packet loss ratio, under the periodic round-trip traffic pattern, a unique feature of typical NCSs. Considering periodic round-trip traffic, an analytical model based on Markov chain theory is developed for deriving these performance indices under a critical real-time traffic condition, at which the real-time performance constraints are marginally satisfied. Case studies are also carried out to validate the theoretical development.
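
As a back-of-envelope companion to the Markov-chain analysis, the sketch below checks the obvious necessary condition under periodic round-trip traffic: the channel time consumed per cycle by every loop's sensor-to-controller and controller-to-actuator transmissions must fit within the control period. The timing figures are illustrative assumptions, not outputs of the paper's model.

```python
# Back-of-envelope necessary condition for periodic round-trip traffic
# (NOT the paper's Markov-chain model). Timing values are assumptions.
n_loops = 10            # control loops sharing the WLAN
period = 20e-3          # control (sampling) period [s]
t_frame = 0.6e-3        # mean channel time per frame incl. overheads [s]
frames_per_loop = 2     # sensor->controller and controller->actuator

busy = n_loops * frames_per_loop * t_frame
print(f"channel utilisation per period: {busy / period:.1%}")
print("feasible" if busy <= period else "infeasible: period too short")
```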

Relevance:

80.00%

Publisher:

Abstract:

Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with their minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. The published literature investigates the resistivity variations for constant blood flow; in this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude less than that of stationary blood. The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The results of the analytical theoretical model were compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to assess the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively.
The results of both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98 experimental, r² = 0.94 theoretical). The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance for the same velocity of flow. However, when the velocity was divided by the radius of the tube (labelled the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius; this had not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal, suggesting that the impedance contains many of the fluctuations of the velocity signal. Application of a theoretical steady flow model to pulsatile flow, presented here, verified that the steady flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with the degree of stenosis. The clinical results also showed a statistically significant difference in the time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180-250 s) is consistently larger than that determined for control subjects (τ = 50-130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
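
The decay time constant τ reported above can be extracted by fitting an exponential to the impedance during the deceleration phase. A minimal sketch with synthetic data (not the thesis measurements) is:

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit Z(t) = Z0 + dZ * exp(-t / tau) to the impedance
# during deceleration to extract tau. Data below are synthetic.
def decay(t, z0, dz, tau):
    return z0 + dz * np.exp(-t / tau)

t = np.linspace(0, 600, 200)                       # time [s]
rng = np.random.default_rng(2)
z_meas = decay(t, 100.0, 5.0, 120.0) + rng.normal(0, 0.1, t.size)

popt, _ = curve_fit(decay, t, z_meas, p0=(100.0, 5.0, 50.0))
print(f"fitted tau = {popt[2]:.0f} s")
```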

Relevance:

80.00%

Publisher:

Abstract:

Quality and bitrate modeling is essential to effectively adapt the bitrate and quality of videos delivered to multiplatform devices over resource-constrained heterogeneous networks. The recent model proposed by Wang et al. estimates the bitrate and quality of videos in terms of the frame rate and quantization parameter. However, to build an effective video adaptation framework, it is crucial to incorporate the spatial resolution in the analytical model for bitrate and perceptual quality adaptation. Hence, this paper proposes an analytical model to estimate the bitrate of videos in terms of the quantization parameter, frame rate, and spatial resolution. The model fits the measured data accurately, which is evident from the high Pearson correlation. The proposed model is based on the observation that the relative reduction in bitrate due to decreasing spatial resolution is independent of the quantization parameter and frame rate. This model can be used in a rate-constrained bit-stream adaptation scheme which selects the scalability parameters that optimize the perceptual quality for a given bandwidth constraint.
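
The observation above — that the relative bitrate reduction from downscaling is independent of the quantization parameter and frame rate — implies a separable model. The sketch below shows one such separable power-function form in the style of Wang et al.-type models; the exact functional form and the exponents are assumptions, not the paper's fitted model.

```python
# Hedged sketch of a separable power-function bitrate model consistent
# with the observation above. Functional form and exponents (a, b, c)
# are illustrative assumptions, not the paper's fitted model.
def bitrate(q, t, s, r_max=8.0e6, q_min=16, t_max=30.0, s_max=1.0,
            a=1.2, b=0.6, c=0.8):
    """Estimated bitrate [bit/s] at quantization step q, frame rate t
    [fps], and spatial resolution s (fraction of full resolution)."""
    return r_max * (q / q_min) ** (-a) * (t / t_max) ** b * (s / s_max) ** c

# By construction, the ratio below is independent of q and t:
for s in (1.0, 0.5, 0.25):
    ratio = bitrate(24, 15.0, s) / bitrate(24, 15.0, 1.0)
    print(f"s = {s:4.2f}: bitrate ratio = {ratio:.2f}")
```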

Relevance:

80.00%

Publisher:

Abstract:

A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless; however, because of random transmission delays and packet losses, the control performance of a control system may be badly deteriorated, and the control system rendered unstable. The main challenge of NCS design is to both maintain and improve the stable control performance of an NCS. To achieve this, communication and control methodologies have to be designed. In recent decades, Ethernet and 802.11 networks have been introduced into control networks, and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design. To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft-real-time control applications are modeled using a Markov chain, in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern. The Markov chain model accurately captures the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve the tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
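
Of the pieces listed above, the relative clock synchronisation step admits a compact illustration: the classic two-way timestamp exchange (as used in NTP/PTP-style protocols) estimates the clock offset and path delay between two nodes. This is a generic sketch of the underlying idea, not the thesis's high-precision protocol.

```python
# Generic sketch of relative clock-offset estimation from a two-way
# timestamp exchange (NTP/PTP style); NOT the thesis's protocol.
def offset_delay(t1, t2, t3, t4):
    """t1: request sent (local clock), t2: request received (remote),
    t3: reply sent (remote), t4: reply received (local)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # remote clock minus local
    delay = (t4 - t1) - (t3 - t2)            # round-trip path delay
    return offset, delay

# Example: remote clock runs 2.5 ms ahead, 1 ms one-way network delay
off, dly = offset_delay(t1=0.0000, t2=0.0035, t3=0.0040, t4=0.0025)
print(f"estimated offset = {off * 1e3:.2f} ms, path delay = {dly * 1e3:.2f} ms")
```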