220 results for Multi-scale hierarchical framework


Relevance: 30.00%

Abstract:

Campylobacter jejuni, followed by Campylobacter coli, contributes substantially to the economic and public health burden attributed to food-borne infections in Australia. Genotypic characterisation of isolates has provided new insights into the epidemiology and pathogenesis of C. jejuni and C. coli. However, currently available methods are not conducive to the large-scale epidemiological investigations necessary to elucidate the global epidemiology of these common food-borne pathogens. This research aims to develop high-resolution C. jejuni and C. coli genotyping schemes that are convenient for high-throughput applications. Real-time PCR and High Resolution Melt (HRM) analysis are fundamental to the genotyping schemes developed in this study and enable rapid, cost-effective interrogation of a range of different polymorphic sites within the Campylobacter genome. While the sources and routes of transmission of campylobacters are unclear, handling and consumption of poultry meat are frequently associated with human campylobacteriosis in Australia. Therefore, chicken-derived C. jejuni and C. coli isolates were used to develop and verify the methods described in this study. Aim 1 of this study describes the application of MLST-SNP (Multi-Locus Sequence Typing Single Nucleotide Polymorphism) + binary typing to 87 chicken C. jejuni isolates using real-time PCR analysis. These typing schemes were developed previously by our research group using isolates from campylobacteriosis patients. The present study showed that SNP and binary typing, alone or in combination, are effective at detecting epidemiological linkage between chicken-derived Campylobacter isolates and enable data comparison with other MLST-based investigations. SNP + binary types obtained from chicken isolates in this study were compared with a set of human isolates previously typed by SNP + binary typing and MLST. Common genotypes between the two collections were identified, and ST-524 represented a clone that could be worth monitoring in the chicken meat industry. In contrast, ST-48, mainly associated with bovine hosts, was abundant in the human isolates but absent from the chicken isolates, indicating the role of non-poultry sources in human Campylobacter infections and demonstrating the potential application of SNP + binary typing for epidemiological investigations and source tracing. While MLST SNPs and binary genes comprise the more stable backbone of the Campylobacter genome and are indicative of long-term epidemiological linkage between isolates, Aim 2 of this study describes the development of an HRM curve analysis method to interrogate the hypervariable Campylobacter flagellin-encoding gene (flaA). The flaA gene product appears to be an important pathogenicity determinant of campylobacters and is therefore a popular target for genotyping, especially for short-term epidemiological studies such as outbreak investigations. HRM curve analysis-based flaA interrogation is a single-step, closed-tube method that provides portable data that can be easily shared and accessed. Critical to the development of flaA HRM was the use of flaA-specific primers that did not amplify the flaB gene. HRM curve analysis of flaA successfully discriminated the 47 sequence variants identified within the 87 C. jejuni and 15 C. coli isolates and correlated with the epidemiological background of the isolates.
In the combinatorial format, the resolving power of flaA was additive to that of SNP + binary typing and CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) HRM, and fits the PHRANA (Progressive Hierarchical Resolving Assays using Nucleic Acids) approach to genotyping. The use of statistical methods to analyse the HRM data enhanced the sophistication of the method. Therefore, flaA HRM is a rapid and cost-effective alternative to gel- or sequence-based flaA typing schemes. Aim 3 of this study describes the development of a novel bioinformatics-driven method, called 'SNP Nucleated Minim MLST' or 'Minim typing', to interrogate Campylobacter MLST gene fragments using HRM. The method involves HRM interrogation of MLST fragments that encompass highly informative "nucleating SNPs" to ensure high resolution. Selection of fragments potentially suited to HRM analysis was conducted in silico using i) "Minimum SNPs" and ii) the new 'HRMtype' software packages. Species-specific sets of six nucleating SNPs and six HRM fragments were identified for both C. jejuni and C. coli to ensure high typeability and resolution relevant to the MLST database. Minim typing was tested empirically by typing 15 C. jejuni and five C. coli isolates. The clonal complexes (CCs) assigned to each isolate by Minim typing and by SNP + binary typing were used to compare the two MLST interrogation schemes; the CCs linked with each C. jejuni isolate were consistent between methods. Thus, Minim typing is an efficient and cost-effective method to interrogate MLST genes. However, it is not expected to be independent of, or to match the resolution of, sequence-based MLST gene interrogation. Minim typing in combination with flaA HRM is envisaged to comprise a highly resolving combinatorial typing scheme developed around the HRM platform that is amenable to automation and multiplexing. The genotyping techniques described in this thesis involve the combinatorial interrogation of differentially evolving genetic markers on a unified real-time PCR and HRM platform. They provide high resolution and are simple, cost-effective and ideally suited to rapid, high-throughput genotyping of these common food-borne pathogens.
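A standard way to quantify a typing scheme's resolving power, and the additive gain from combining markers that the abstract describes, is Simpson's index of diversity. The sketch below uses made-up genotype assignments purely for illustration; it is not taken from the thesis.

```python
from collections import Counter

def simpsons_diversity(types):
    """Simpson's index of diversity: D = 1 - sum(n_i*(n_i-1)) / (N*(N-1)).

    D is the probability that two isolates drawn at random receive
    different genotypes under the scheme (higher = more resolving)."""
    n = len(types)
    counts = Counter(types)
    return 1 - sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Hypothetical genotype calls for six isolates under two schemes.
snp_binary = ["A", "A", "B", "B", "C", "C"]
flaA_hrm   = ["x", "y", "x", "y", "x", "y"]

print(simpsons_diversity(snp_binary))           # 0.8, single scheme
print(simpsons_diversity(flaA_hrm))             # 0.6, single scheme
combined = list(zip(snp_binary, flaA_hrm))      # combinatorial format
print(simpsons_diversity(combined))             # 1.0, all six separated
```

With these hypothetical calls, the schemes alone give D = 0.8 and D = 0.6, while the combined profile separates all six isolates (D = 1.0), which is the additive-resolution effect described above.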

Relevance: 30.00%

Abstract:

Over recent years, Unmanned Air Vehicles (UAVs) have become a powerful tool for reconnaissance and surveillance tasks. These vehicles are now available in a broad size and capability range and are intended to fly in regions where the presence of onboard human pilots is either too risky or unnecessary. This paper describes the formulation and application of a design framework that supports the complex task of multidisciplinary design optimisation of UAV systems via evolutionary computation. The framework includes a Graphical User Interface (GUI), a robust Evolutionary Algorithm (EA) optimiser named HAPEA, several design modules, mesh generators and post-processing capabilities in an integrated platform. Population-based algorithms such as EAs are well suited to problems where the search space can be multi-modal, non-convex or discontinuous, with multiple local minima and noise, as well as problems where multiple solutions are sought via game theory, namely a Nash equilibrium point or a Pareto set of non-dominated solutions. The application of the methodology is illustrated on conceptual and detailed multi-criteria and multidisciplinary shape design problems. Results indicate the practicality and robustness of the framework in finding optimal shapes and trade-offs between the disciplinary analyses, and in producing a set of non-dominated solutions forming an optimal Pareto front for the designer.
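To illustrate the multi-objective machinery such a framework relies on, here is a minimal Pareto filter that extracts the non-dominated set from a population of evaluated designs. It is a generic sketch, not HAPEA itself; the two objectives (say, drag and structural weight, both minimised) are hypothetical.

```python
def dominates(a, b):
    """True if design a is at least as good as b on every objective
    (minimisation) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Hypothetical (drag, weight) evaluations for five candidate shapes.
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
print(pareto_front(designs))  # (3.0, 4.0) is dominated and drops out
```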

Relevance: 30.00%

Abstract:

Intelligent surveillance systems typically use a single visual spectrum modality for their input. These systems work well in controlled conditions but often fail when lighting is poor or when environmental effects such as shadows, dust or smoke are present. Thermal spectrum imagery is not as susceptible to environmental effects; however, thermal imaging sensors are more sensitive to noise and produce only grayscale images, making it difficult to distinguish between objects. Several approaches to combining the visual and thermal modalities have been proposed, but they are limited by the assumption that both modalities are performing equally well. When one modality fails, existing approaches are unable to detect the drop in performance and to disregard the underperforming modality. In this paper, a novel middle fusion approach for combining visual and thermal spectrum images for object tracking is proposed. Motion and object detection are performed on each modality, and the object detection results for each modality are fused based on the current performance of each modality. Modality performance is determined by comparing the number of objects tracked by the system with the number detected by each mode, with a small allowance made for objects entering and exiting the scene. The tracking performance of the proposed fusion scheme is compared with the performance of the visual and thermal modes individually, and with a baseline middle fusion scheme. Improvement in tracking performance using the proposed fusion approach is demonstrated. The proposed approach is also shown to be able to detect the failure of an individual modality and disregard its results, ensuring performance is not degraded in such situations.
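A minimal sketch of the performance-weighted fusion rule described above: each modality is scored by how closely its detection count matches the number of objects the tracker is following, with a small allowance for scene entries and exits, and a modality whose score collapses is disregarded. The function names, allowance and failure threshold are assumptions for illustration, not the paper's exact formulation.

```python
def modality_weight(n_tracked, n_detected, allowance=1):
    """Score a modality by agreement between its detection count and
    the number of objects currently tracked by the system."""
    mismatch = max(abs(n_tracked - n_detected) - allowance, 0)
    return 1.0 / (1.0 + mismatch)

def fuse_scores(visual_score, thermal_score, n_tracked,
                n_visual, n_thermal, min_weight=0.2):
    """Weight per-object detection scores by modality performance;
    a modality whose weight collapses is effectively disregarded."""
    wv = modality_weight(n_tracked, n_visual)
    wt = modality_weight(n_tracked, n_thermal)
    if wv < min_weight: wv = 0.0   # visual modality has failed
    if wt < min_weight: wt = 0.0   # thermal modality has failed
    total = wv + wt
    return (wv * visual_score + wt * thermal_score) / total if total else 0.0

# Tracker follows 4 objects; smoke: visual reports 10 blobs, thermal 4.
print(fuse_scores(0.9, 0.6, n_tracked=4, n_visual=10, n_thermal=4))  # 0.6
```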

Relevance: 30.00%

Abstract:

This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and on the computation of normalization constants arose from the pursuit of these data-analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the recorded zeroes: these may represent a zero response given some threshold (presence), or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses while taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts; the dingo, cypress and toad case studies described in the motivation chapter are examples. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters of these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy, which requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces the background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development, for the first time, of a fully Bayesian approach to inference for these hierarchical models. Note: the author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
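To give a flavour of the Bayesian computation involved, the sketch below performs Gibbs sweeps for a simple autologistic MRF on a binary lattice, where each site's full conditional is logistic in an overall parameter plus a neighbour-sum term. This is a generic two-parameter illustration of the model family, not the thesis code (the thesis focuses on a three-parameter version).

```python
import math, random

def gibbs_sweep(z, alpha, beta):
    """One Gibbs sweep over a binary lattice z (list of lists of 0/1)
    under an autologistic model:
    logit P(z_ij = 1 | rest) = alpha + beta * (sum of 4 nearest neighbours)."""
    rows, cols = len(z), len(z[0])
    for i in range(rows):
        for j in range(cols):
            s = sum(z[i + di][j + dj]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < rows and 0 <= j + dj < cols)
            p = 1.0 / (1.0 + math.exp(-(alpha + beta * s)))
            z[i][j] = 1 if random.random() < p else 0
    return z

random.seed(1)
lattice = [[random.randint(0, 1) for _ in range(10)] for _ in range(10)]
for _ in range(100):                 # burn-in sweeps
    gibbs_sweep(lattice, alpha=-1.0, beta=0.5)
```

Note that the intractable joint normalization constant never appears in these full conditionals, which is why Gibbs sampling is feasible even though model comparison, which does require the constant, is the hard problem the thesis addresses.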

Relevance: 30.00%

Abstract:

This paper considers the use of servo-mechanisms as part of a tightly integrated homogeneous Wireless Multimedia Sensor Network (WMSN). We describe the design of our second-generation WMSN node platform, which has increased image resolution, in-built audio sensors, PIR sensors, and servo-mechanisms. These devices differ widely in their energy consumption and in the information quality they return. As a result, we propose a framework that establishes a hierarchy of devices (sensors and actuators) within the node and uses frequent sampling of cheaper devices to trigger the activation of more energy-hungry devices. Within this framework, we consider the suitability of servos for WMSNs by examining their functional characteristics and by measuring the energy consumption of two analog and two digital servos, in order to determine their impact on overall node energy cost. We also implement a simple version of our hierarchical sampling framework to evaluate the energy consumption of servos relative to other node components. The evaluation results show that: (1) the energy consumption of servos is small relative to the audio/image signal processing energy cost in WMSN nodes; (2) digital servos do not necessarily consume as much energy as is currently believed; and (3) the energy cost per degree of panning is lower for larger panning angles.
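A minimal sketch of the hierarchical sampling idea: devices are ordered from cheapest to most energy-hungry, and a detection at one tier triggers sampling of the next. The device list, energy figures and trigger logic below are hypothetical, not measurements from the paper.

```python
def hierarchical_sample(tiers):
    """Sample tiers in order of energy cost; only escalate to the next
    (more expensive) device when the current one reports activity.
    Each tier is a (name, sampling cost in mJ, detect callable) tuple."""
    spent = 0.0
    for name, cost_mj, detect in tiers:
        spent += cost_mj
        if not detect():
            break                    # nothing seen: stay cheap
        print(f"{name} triggered -> activating next tier")
    return spent

tiers = [
    ("PIR",    0.1,  lambda: True),   # passive infrared: near-free
    ("audio",  5.0,  lambda: True),   # sample microphone, classify
    ("camera", 80.0, lambda: False),  # image capture + pan servo
]
print(f"energy spent: {hierarchical_sample(tiers):.1f} mJ")
```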

Relevance: 30.00%

Abstract:

This paper presents the results of a structural equation model (SEM) that describes and quantifies the relationships between corporate culture and safety performance. The SEM is estimated using 196 individual questionnaire responses from three companies with better-than-average safety records. A multiattribute analysis of corporate safety culture characteristics resulted in a hierarchical description of corporate safety culture comprising three major categories: people, process, and value. These three major categories were decomposed into 54 measurable questions, which were used to develop a questionnaire to quantify corporate safety culture. The SEM identified five latent variables that describe corporate safety culture: (1) a company's safety commitment; (2) the safety incentives offered to field personnel for safe performance; (3) subcontractor involvement in the company culture; (4) field safety accountability and dedication; and (5) the disincentives for unsafe behaviors. These characteristics of company safety culture serve as indicators of a company's safety performance. Based on the findings from this limited sample of three companies, this paper proposes a list of practices that companies may consider to improve corporate safety culture and safety performance. A more comprehensive study based on a larger sample is recommended to corroborate the findings of this study.
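The paper estimates a structural equation model; as a loosely related, exploratory stand-in, the sketch below extracts five latent factors from a simulated 196-respondent by 54-question response matrix using off-the-shelf factor analysis. It illustrates only the latent-variable idea, not the SEM estimation actually performed in the study.

```python
# Exploratory sketch only: data are simulated, not the study's.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(196, 54)).astype(float)  # Likert 1-5

fa = FactorAnalysis(n_components=5, random_state=0)
scores = fa.fit_transform(responses)      # (196, 5) latent factor scores
loadings = fa.components_                 # (5, 54) question loadings
print(loadings.shape, scores.shape)
```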

Relevance: 30.00%

Abstract:

Open access reforms to railway regulations allow multiple train operators to provide rail services on a common infrastructure. As railway operations are now independently managed by different stakeholders, conflicts in operations may arise, and there have been attempts to derive an effective access charge regime through which these conflicts may be resolved. One approach is direct negotiation between the infrastructure manager and the train service providers. Despite the substantial literature on the topic, few studies consider the benefits of employing computer simulation as an evaluation tool for railway operational activities such as access pricing. This article proposes a multi-agent system (MAS) framework for the railway open market and demonstrates its feasibility by modelling the negotiation between an infrastructure provider and a train service operator. Empirical results show that the model is capable of resolving operational conflicts according to market demand.
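One simple way to model the negotiation the article describes is an alternating-offers exchange over the access charge, with each agent conceding toward its private reservation value until the offers cross. The concession rule and figures below are hypothetical, not the article's protocol.

```python
def negotiate(im_ask, im_floor, op_bid, op_ceiling,
              concession=0.1, max_rounds=50):
    """Infrastructure manager (IM) asks high and concedes downward;
    train operator (OP) bids low and concedes upward. Agreement is
    reached when the bid meets the ask."""
    for rounds in range(1, max_rounds + 1):
        if op_bid >= im_ask:
            return (im_ask + op_bid) / 2, rounds   # split the overlap
        im_ask -= concession * (im_ask - im_floor)
        op_bid += concession * (op_ceiling - op_bid)
    return None, max_rounds                        # no agreement

charge, rounds = negotiate(im_ask=100.0, im_floor=60.0,
                           op_bid=40.0, op_ceiling=80.0)
print(f"agreed access charge {charge:.2f} after {rounds} rounds")
```

Because the operator's ceiling (80) exceeds the manager's floor (60), a zone of agreement exists and the offers eventually cross; with disjoint reservation values the loop times out, which is the conflict case the MAS must then resolve.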

Relevance: 30.00%

Abstract:

Urban sprawl combined with low-density development causes unsustainable development patterns, including accessibility and mobility problems, especially for those who lack the capacity to own a vehicle or access to quality public transport services. Sustainable transportation development is crucial to solving transport disadvantage problems in urban settlements. People affected by these problems are referred to as 'transportation disadvantaged'. Transportation disadvantage is a multi-dimensional problem that combines socio-economic, transportation and spatial characteristics or dimensions. However, a substantial number of transportation disadvantage studies so far focus only on the socio-economic and transportation dimensions, while the spatial dimension of transportation disadvantage has been neglected. This chapter investigates the spatial dimension of transportation disadvantage by comparing the travel capabilities and accessibility levels of residents with land use characteristics. The analysis identifies land use characteristics significantly associated with travel inability and is useful for identifying the transportation disadvantaged population.
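Accessibility in such analyses is commonly measured with a Hansen-type gravity index, A_i = sum_j O_j * exp(-beta * c_ij). The sketch below computes it for one hypothetical zone; the chapter's actual indicators may differ.

```python
import math

def accessibility(opportunities, costs, beta=0.1):
    """Hansen-type gravity accessibility for one zone: sum over
    destination zones of opportunities discounted by travel cost."""
    return sum(o * math.exp(-beta * c) for o, c in zip(opportunities, costs))

# Hypothetical zone: jobs reachable and travel times (minutes).
jobs  = [500, 1200, 300]
times = [10, 25, 40]
print(f"accessibility score: {accessibility(jobs, times):.0f}")
```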

Relevance: 30.00%

Abstract:

Purpose: In the global knowledge economy, investment in knowledge-intensive industries and information and communication technology (ICT) infrastructures is seen as a significant factor in improving the overall socio-economic fabric of cities. Consequently, knowledge-based urban development (KBUD) has become a new paradigm in urban planning and development for increasing the welfare and competitiveness of cities and regions. The paper discusses the critical connections between KBUD strategies, knowledge-intensive industries and ICT infrastructures. In particular, it investigates the application of the KBUD concept by discussing one of South East Asia's large-scale manifestations of KBUD: Malaysia's Multimedia Super Corridor (MSC).

Design/methodology/approach: The paper provides a review of the KBUD concept and develops a KBUD assessment framework to provide a clearer understanding of the development and evolution of KBUD manifestations. Subsequently, the paper investigates the implementation of the KBUD concept within the Malaysian context, and particularly the MSC.

Originality/value: The paper, with its KBUD assessment framework, scrutinises Malaysia's experience, providing an overview of the MSC project and a discussion of the case findings. The development and evolution of the MSC are viewed with regard to KBUD policy implementation, infrastructural implications, and the agencies involved in the development and management of the MSC.

Practical implications: The emergence of the knowledge economy, together with the issues of globalisation and rapid urbanisation, has created an urgent need for urban planners to explore new ways of strategising planning and development that encompass the needs and requirements of the knowledge economy and society. In light of the literature and the MSC case findings, the paper provides generic recommendations on the orchestration of knowledge-based urban development for other cities and regions seeking to transform to the knowledge economy.

Relevance: 30.00%

Abstract:

Hollow micro-sized H2(H2O)Nb2O6 spheres constructed from nanocrystallites have been successfully synthesized via a bubble-template-assisted hydrothermal process. In the reaction, H2O2 acts as a bubble generator and plays a key role in the formation of the hollow structure; an in situ bubble-template mechanism is proposed for its possible formation. The sphere-like assemblies of H2(H2O)Nb2O6 nanoparticles were transformed into the corresponding pseudohexagonal phase of Nb2O5 through a moderate annealing dehydration process without destroying the hierarchical structure. The optical properties of the as-prepared hollow spheres were investigated. Notably, the absorption edge of the hollow Nb2O5 microspheres in the UV/vis spectra shifts about 18 nm toward the violet compared with bulk powders, indicating superior optical properties.
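The reported blue shift of the absorption edge can be translated into a band-gap change via E(eV) ≈ 1239.84 / λ(nm). In the sketch below, the bulk edge wavelength is an assumed round figure for illustration; only the 18 nm shift comes from the abstract.

```python
# Relate the ~18 nm violet (blue) shift of the absorption edge to a
# band-gap change using E(eV) = 1239.84 / wavelength(nm).
bulk_edge_nm = 390.0                      # assumed bulk edge, not from the paper
hollow_edge_nm = bulk_edge_nm - 18.0      # reported blue shift
gap = lambda nm: 1239.84 / nm
print(f"band-gap widening: {gap(hollow_edge_nm) - gap(bulk_edge_nm):.2f} eV")
```

Under that assumption the 18 nm shift corresponds to a band-gap widening of roughly 0.15 eV, consistent with the size-related effects expected for nanocrystallite-built structures.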

Relevance: 30.00%

Abstract:

Plant biosecurity requires statistical tools to interpret field surveillance data in order to manage pest incursions that threaten crop production and trade. Ultimately, management decisions need to be based on the probability that an area is infested or free of a pest. Current informal approaches to delimiting pest extent rely upon expert ecological interpretation of presence/absence data over space and time. Hierarchical Bayesian models provide a cohesive statistical framework that can formally integrate the available information on both pest ecology and data. The overarching method involves constructing an observation model for the surveillance data, conditional on the hidden extent of the pest and uncertain detection sensitivity. The extent of the pest is then modelled as a dynamic invasion process that includes uncertainty in ecological parameters. Modelling approaches to assimilate this information are explored through case studies on spiralling whitefly (Aleurodicus dispersus) and red-banded mango caterpillar (Deanolis sublimbalis). Markov chain Monte Carlo simulation is used to estimate the probable extent of pests, given the observation and process models conditioned by surveillance data. Statistical methods, based on time-to-event models, are developed to apply hierarchical Bayesian models to early detection programs and to demonstrate area freedom from pests. The value of early detection surveillance programs is demonstrated through an application interpreting surveillance data for exotic plant pests with uncertain spread rates. The model suggests that typical early detection programs provide a moderate reduction in the probability of an area being infested, but a dramatic reduction in the expected area of incursions at a given time. Estimates of spiralling whitefly extent are examined at local, district and state-wide scales. The local model estimates the rate of natural spread and the influence of host architecture, host suitability and inspector efficiency; these parameter estimates can support the development of robust surveillance programs. Hierarchical Bayesian models for the human-mediated spread of spiralling whitefly are developed for the colonisation of discrete cells connected by a modified gravity model. By estimating dispersal parameters, the model can be used to predict the extent of the pest over time, and an extended model predicts the climate-restricted distribution of the pest in Queensland. These novel human-mediated movement models are well suited to demonstrating area freedom at coarse spatio-temporal scales. At finer scales, and in the presence of ecological complexity, exploratory models are developed to investigate the capacity of surveillance information to estimate the extent of red-banded mango caterpillar. It is apparent that excessive uncertainty about observation and ecological parameters can impose limits on inference at the scales required for effective management of response programs. The thesis contributes novel statistical approaches to estimating the extent of pests and develops applications to assist decision-making across a range of plant biosecurity surveillance activities. Hierarchical Bayesian modelling is demonstrated to be both a useful analytical tool for estimating pest extent and a natural investigative paradigm for developing and focussing biosecurity programs.
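At the core of such surveillance models is the Bayesian update of the probability that an area is infested after repeated negative surveys with imperfect detection. The sketch below shows the simplest static, single-area version of that calculation (the thesis models add spatial structure and invasion dynamics); all figures are illustrative.

```python
def prob_infested_after_negatives(prior, sensitivity, n_surveys):
    """Posterior P(infested | n negative surveys) when each survey of
    an infested area detects the pest with probability `sensitivity`:
        p = prior * (1 - sensitivity)**n
            / (prior * (1 - sensitivity)**n + (1 - prior))."""
    miss = (1.0 - sensitivity) ** n_surveys
    return prior * miss / (prior * miss + (1.0 - prior))

# Illustrative figures: 20% prior probability of infestation,
# 60% per-survey detection sensitivity.
for n in (0, 1, 3, 5):
    p = prob_infested_after_negatives(0.20, 0.60, n)
    print(f"{n} negative surveys -> P(infested) = {p:.3f}")
```

Each negative survey shrinks the posterior (here from 0.20 toward roughly 0.003 after five surveys), which is exactly the quantity used to declare area freedom at a chosen evidence threshold.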

Relevance: 30.00%

Abstract:

With the emergence of multi-core processors into the mainstream, parallel programming is no longer the specialized domain it once was. There is a growing need for systems that allow programmers to more easily reason about data dependencies and the inherent parallelism in general-purpose programs. Many of these programs are written in popular imperative programming languages like Java and C#. In this thesis I present a system for reasoning about the side-effects of evaluation in an abstract and composable manner that is suitable for use by both programmers and automated tools such as compilers. The goal of developing such a system is both to facilitate the automatic exploitation of the inherent parallelism present in imperative programs and to allow programmers to reason about the dependencies which may be limiting the parallelism available for exploitation in their applications. Previous work on languages and type systems for parallel computing has tended to focus on providing the programmer with tools to facilitate the manual parallelization of programs; programmers must decide when and where it is safe to employ parallelism without the assistance of the compiler or other automated tools. None of the existing systems combine abstraction and composition with parallelization and correctness checking to produce a framework that helps both programmers and automated tools to reason about inherent parallelism. In this work I present a system for abstractly reasoning about side-effects and data dependencies in modern, imperative, object-oriented languages using a type and effect system based on ideas from Ownership Types. I have developed sufficient conditions for the safe, automated detection and exploitation of a number of task, data and loop parallelism patterns in terms of ownership relationships. To validate my work, I have applied my ideas to the C# version 3.0 language to produce a language extension called Zal. I have implemented a compiler for the Zal language as an extension of the GPC# research compiler as a proof of concept of my system, and have used it to parallelize a number of real-world applications to demonstrate the feasibility of my proposed approach. In addition to this empirical validation, I present an argument for the correctness of the type system and language semantics I have proposed, as well as sketches of proofs for the correctness of the sufficient conditions for parallelization.
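Zal's analysis is a static type-and-effect system for C#, but the core safety check it enables can be illustrated in a few lines: two tasks may run in parallel only if neither writes state that the other reads or writes. The sketch below mimics that check dynamically over hypothetical effect sets; it illustrates the idea, not the thesis system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Effect:
    """Abstract read/write effect of a task over named regions
    (standing in for ownership contexts)."""
    reads: frozenset = frozenset()
    writes: frozenset = frozenset()

def may_parallelise(a: Effect, b: Effect) -> bool:
    """Safe iff neither task writes state the other touches."""
    return not (a.writes & (b.reads | b.writes) or
                b.writes & (a.reads | a.writes))

# Hypothetical effects over two abstract regions.
update_stats = Effect(reads=frozenset({"orders"}), writes=frozenset({"stats"}))
render_view  = Effect(reads=frozenset({"orders", "stats"}))
print(may_parallelise(update_stats, render_view))  # False: "stats" conflict
```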

Relevance: 30.00%

Abstract:

Proposed transmission smart grids will use a digital platform for the automation of substations operating at voltage levels of 110 kV and above. The IEC 61850 series of standards, released in parts over the last ten years, provides a specification for substation communications networks and systems. These standards, along with IEEE Std 1588-2008 Precision Time Protocol version 2 (PTPv2) for precision timing, are recommended by both the IEC Smart Grid Strategy Group and the NIST Framework and Roadmap for Smart Grid Interoperability Standards for substation automation. IEC 61850-8-1 and IEC 61850-9-2 provide an interoperable solution to support multi-vendor digital process bus solutions, allowing for the removal of potentially lethal voltages and damaging currents from substation control rooms and a reduction in the amount of cabling required in substations, and facilitating the adoption of non-conventional instrument transformers (NCITs). IEC 61850, PTPv2 and Ethernet are three complementary protocol families that together define the future of sampled value (SV) digital process connections for smart substation automation. This paper describes a specific test and evaluation system, using real-time simulation, protection relays, PTPv2 time clocks and artificial network impairment, that is being used to investigate technical impediments to the adoption of SV process bus systems by transmission utilities. Knowing the limits of a digital process bus, especially when sampled values and NCITs are included, will enable utilities to make informed decisions regarding the adoption of this technology.
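For a sense of the traffic a sampled value process bus carries, the back-of-envelope sketch below estimates per-stream bandwidth from the common 80 samples-per-cycle merging unit rate at 50 Hz. The frame size is an assumed round figure, since the actual size depends on the dataset, VLAN tagging and encoding.

```python
# Back-of-envelope bandwidth for one IEC 61850-9-2 sampled value stream.
samples_per_cycle = 80                 # common protection-class rate
system_hz = 50
frame_bytes = 125                      # assumption, not from the standard

frames_per_s = samples_per_cycle * system_hz          # 4000 frames/s
mbps = frames_per_s * frame_bytes * 8 / 1e6
print(f"{frames_per_s} frames/s  ~= {mbps:.1f} Mbit/s per stream")
```

At roughly 4 Mbit/s per stream, several merging units can share a switched 100 Mbit/s network, but jitter and loss under impairment, which the described test system injects artificially, become the limiting factors rather than raw bandwidth.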

Relevance: 30.00%

Abstract:

Since the establishment of the first national strategic development plan in the early 1970s, the construction industry has played an important role in the economic, social and cultural development of Indonesia. The industry's contribution to Indonesia's gross domestic product (GDP) increased from 3.9% in 1973 to 7.7% in 2007. Business Monitor International (2009) forecasts that Indonesia is home to one of the fastest-growing construction industries in Asia, despite the average construction growth rate being expected to remain under 10% over the period 2006-2010. Similarly, Howlett and Powell (2006) place Indonesia among the 20 largest construction markets in 2010. Although the prospects for the Indonesian construction industry are now very promising, many local construction firms still face serious difficulties, such as poor performance and low competitiveness. There are two main reasons behind this problem: the unfavourable environment these firms face, and the lack of strategic direction needed to improve competitiveness and performance. Furthermore, although strategic management is now widely used by many large construction firms in developed countries, practical examples and empirical studies related to the Indonesian construction industry remain scarce, and research endeavours related to these topics in developing countries appear to be limited. This has potentially become one of the factors hampering efforts to guide Indonesian construction enterprises. This research aims to construct a conceptual model to enable Indonesian construction enterprises to develop a sound long-term corporate strategy that generates competitive advantage and superior performance. The conceptual model seeks to address the main prescription of the dynamic capabilities framework (Teece, Pisano & Shuen, 1997; Teece, 2007) within the context of the Indonesian construction industry. It is hypothesised that, in a rapidly changing and varied environment, competitive success arises from the continuous development and reconfiguration of firm-specific assets: achieving competitive advantage depends not only on the exploitation of specific assets/capabilities, but on the exploitation of all asset-capability combinations in the dynamic capabilities framework. The model is refined through sequential statistical regression analyses of survey results, with a sample of 120 valid responses. The results provide empirical evidence that implementing a dynamic capabilities framework to achieve competitive advantage is an important way for a construction enterprise to improve its organisational performance. The characteristics of asset-capability combinations were found to be significant determinants of the competitive advantage of Indonesian construction enterprises, and such advantage sequentially contributes to organisational performance. That the dynamic capabilities framework can work in the Indonesian context suggests that it has potential applicability in other emerging and developing countries. This study also demonstrates the importance of the multi-stage nature of the model, which provides a rich understanding of the dynamic process by which assets and capabilities should be exploited in combination by construction firms operating under varying levels of hostility.
Such findings are believed to be useful to both academics and practitioners. However, as this research examines the dynamic capabilities framework at the enterprise level, future studies should continue to explore and examine the framework at other levels of strategic management in construction, as well as in other countries where different cultures or similar conditions prevail.
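The sequential regression idea, entering blocks of predictors in stages and testing whether each stage adds explanatory power, can be sketched as below. Variable names and data are hypothetical stand-ins, not the survey's instrument.

```python
# Sketch of sequential (staged) regression with simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120                                   # valid responses, as in the study
df = pd.DataFrame({
    "asset_capability": rng.normal(size=n),
    "env_hostility":    rng.normal(size=n),
})
df["competitive_adv"] = 0.6 * df["asset_capability"] + rng.normal(size=n)
df["performance"] = 0.7 * df["competitive_adv"] + rng.normal(size=n)

# Stage 1: asset-capability combinations -> competitive advantage
m1 = sm.OLS(df["competitive_adv"],
            sm.add_constant(df[["asset_capability", "env_hostility"]])).fit()
# Stage 2: competitive advantage -> organisational performance
m2 = sm.OLS(df["performance"],
            sm.add_constant(df[["competitive_adv"]])).fit()
print(m1.rsquared, m2.rsquared)
```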

Relevance: 30.00%

Abstract:

This paper considers the problem of building a software architecture for a human-robot team. The objective of the team is to build a multi-attribute map of the world by performing information fusion. A decentralized approach to information fusion is adopted to achieve the system properties of scalability and survivability. Decentralization imposes constraints on the design of the architecture and its implementation, and we show how a Component-Based Software Engineering approach can address these constraints. The architecture is implemented using Orca, a component-based software framework for robotic systems. Experimental results from a deployed system comprising an unmanned air vehicle, a ground vehicle, and two human operators are presented. A section on lessons learned, which may be applicable to other distributed systems with complex algorithms, is included. We also compare Orca to the Player software framework in the context of distributed systems.
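The decentralised fusion idea behind such architectures is often explained in information (inverse-covariance) form, where each platform's observation contributes additively, so nodes can fuse by exchanging and summing contributions rather than routing everything through a central filter. The sketch below illustrates that additive property; it is a generic illustration, not Orca's API.

```python
import numpy as np

def information_contribution(H, R, z):
    """Information-form contribution of one observation z = Hx + noise(R):
    returns (H^T R^-1 H, H^T R^-1 z)."""
    Ri = np.linalg.inv(R)
    return H.T @ Ri @ H, H.T @ Ri @ z

# Two platforms (e.g. UAV and ground vehicle) observing a 2-D map state.
H = np.eye(2)
uav    = information_contribution(H, 0.5 * np.eye(2), np.array([1.1, 2.0]))
ground = information_contribution(H, 1.0 * np.eye(2), np.array([0.9, 2.2]))

Y = sum(c[0] for c in (uav, ground))     # fused information matrix
y = sum(c[1] for c in (uav, ground))     # fused information vector
print(np.linalg.solve(Y, y))             # fused state estimate
```

Because the fusion step is a commutative sum, the result is the same regardless of which node performs it or in what order contributions arrive, which is what gives a decentralised design its scalability and survivability.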