987 results for code source


Relevance: 20.00%

Abstract:

Background: Work-related injuries in Australia are estimated to cost around $57.5 billion annually; however, there are currently insufficient surveillance data available to support an evidence-based public health response. Emergency departments (EDs) in Australia are a potential source of information on work-related injuries, though most EDs do not have an ‘Activity Code’ to identify work-related cases, with information about the presenting problem recorded in a short free-text field. This study compared methods for interrogating text fields to identify work-related injuries presenting at emergency departments, in order to inform approaches to surveillance of work-related injury.

Methods: Three approaches were used to interrogate an injury-description text field to classify cases as work-related: keyword search, index search, and content-analytic text mining. Sensitivity and specificity were examined by comparing cases flagged by each approach to cases coded with an Activity Code during triage. Methods to improve the sensitivity and/or specificity of each approach were explored by adjusting the classification techniques within each broad approach.

Results: The basic keyword search detected 58% of cases (specificity 0.99), the index search detected 62% of cases (specificity 0.87), and the content-analytic text-mining approach (using adjusted probabilities) detected 77% of cases (specificity 0.95).

Conclusions: The findings of this study provide strong support for continued development of text-searching methods to obtain information from routine emergency department data, to improve the capacity for comprehensive injury surveillance.
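The evaluation described above can be illustrated with a minimal sketch. The keyword list and toy cases below are invented for illustration and are not the study's actual data; only the sensitivity/specificity arithmetic against an Activity-Code reference standard is what the abstract describes.

```python
def keyword_flag(description, keywords=("work", "workplace", "employer")):
    """Flag a case as work-related if any keyword appears in the free-text
    injury description. The keyword list here is purely illustrative."""
    text = description.lower()
    return any(k in text for k in keywords)

def sensitivity_specificity(flags, activity_codes):
    """Compare classifier flags against Activity-Code labels (reference standard)."""
    tp = sum(1 for f, a in zip(flags, activity_codes) if f and a)
    tn = sum(1 for f, a in zip(flags, activity_codes) if not f and not a)
    fp = sum(1 for f, a in zip(flags, activity_codes) if f and not a)
    fn = sum(1 for f, a in zip(flags, activity_codes) if not f and a)
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: (free-text description, Activity-Code says work-related?)
cases = [
    ("fell from ladder at workplace", True),
    ("cut finger while cooking at home", False),
    ("back strain lifting boxes for employer", True),
    ("sports injury at football", False),
    ("burn injury, no details recorded", True),   # missed by keyword search
]
flags = [keyword_flag(text) for text, _ in cases]
sens, spec = sensitivity_specificity(flags, [a for _, a in cases])
```

The last toy case shows why a plain keyword search has limited sensitivity: a genuinely work-related case whose description lacks any keyword is never flagged, which mirrors the 58% detection rate reported for the basic keyword approach.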

Relevance: 20.00%

Abstract:

In the quest for shorter time-to-market, higher quality and reduced cost, model-driven software development has emerged as a promising approach to software engineering. The central idea is to promote models to first-class citizens in the development process. Starting from a set of very abstract models in the early stages of development, these are refined into more concrete models and finally, as a last step, into code. As early phases of development focus on different concepts compared to later stages, various modelling languages are employed to most accurately capture the concepts and relations under discussion. In light of this refinement process, translating between modelling languages becomes a time-consuming and error-prone necessity. This is remedied by model transformations, which provide support for reusing and automating recurring translation efforts. These transformations typically can only be used to translate a source model into a target model, but not vice versa. This poses a problem if the target model is subject to change. In this case, the models get out of sync and therefore no longer constitute a coherent description of the software system, leading to erroneous results in later stages. This is a serious threat to the promised benefits of quality, cost-saving, and time-to-market. Therefore, providing a means to restore synchronisation after changes to models is crucial if the model-driven vision is to be realised. This process of reflecting changes made to a target model back to the source model is commonly known as Round-Trip Engineering (RTE). While there are a number of approaches to this problem, they impose restrictions on the nature of the model transformation. Typically, in order for a transformation to be reversed, for every change to the target model there must be exactly one change to the source model.
While this makes synchronisation relatively “easy”, it is ill-suited for many practically relevant transformations as they do not have this one-to-one character. To overcome these issues and to provide a more general approach to RTE, this thesis puts forward an approach in two stages. First, a formal understanding of model synchronisation on the basis of non-injective transformations (where a number of different source models can correspond to the same target model) is established. Second, detailed techniques are devised that allow the implementation of this understanding of synchronisation. A formal underpinning for these techniques is drawn from abductive logic reasoning, which allows the inference of explanations from an observation in the context of a background theory. As non-injective transformations are the subject of this research, there might be a number of changes to the source model that all equally reflect a certain target model change. To help guide the procedure in finding “good” source changes, model metrics and heuristics are investigated. Combining abductive reasoning with best-first search and a “suitable” heuristic enables efficient computation of a number of “good” source changes. With this procedure Round-Trip Engineering of non-injective transformations can be supported.

Relevance: 20.00%

Abstract:

Since the emergence of the destination branding literature in 1998, there have been few studies related to performance measurement of destination brand campaigns. There has also been little interest to date in researching the extent to which a destination brand represents the host community’s sense of place. Given that local residents represent a key stakeholder group for the destination marketing organisation (DMO), research is required to examine the extent to which marketing communications have been effective in enhancing engagement with the brand, and in inducing a brand image that is congruent with the brand identity. Motivated by conceptual and practical aims, this paper reports the trial of a hierarchy of consumer-based brand equity (CBBE) for a destination, from the perspective of residents as active participants in local tourism. It is proposed that a strong level of CBBE among the host community represents a source of comparative advantage for a destination, which the DMO could proactively develop into a competitive advantage.

Relevance: 20.00%

Abstract:

This submission has been prepared on behalf of Australian consumer advocates by Nicola Howell, Faculty of Law, Queensland University of Technology (‘the researcher’), under a consultancy arrangement with the Australian Securities and Investments Commission (ASIC). The researcher has been engaged by ASIC to consult with consumer advocates across Australia in order to prepare a detailed consumer submission to the Review of the Code of Banking Practice and the Review Issues Paper.

Relevance: 20.00%

Abstract:

This paper presents a novel matched rotation precoding (MRP) scheme to design a rate-one space-frequency block code (SFBC) and a multirate SFBC for MIMO-OFDM systems with limited feedback. The proposed rate-one MRP and multirate MRP can always achieve full transmit diversity and optimal system performance for an arbitrary number of antennas, subcarrier intervals, and subcarrier groupings, with only limited channel knowledge required by the transmit antennas. The optimization process of the rate-one MRP is simple and easily visualized, so that the optimal rotation angle can be derived explicitly, or even intuitively in some cases. The multirate MRP has a more complex optimization process, but it has better spectral efficiency and provides a relatively smooth trade-off between system performance and transmission rate. Simulations show that the proposed SFBC with MRP can overcome the diversity loss in specific propagation scenarios, consistently improve system performance, and deliver a large performance gain. The proposed SFBCs with MRP therefore demonstrate the flexibility and feasibility that make them well suited to a practical MIMO-OFDM system with dynamic parameters.

Relevance: 20.00%

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product-code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model of the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken on coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification.
The motivation for this aspect of the research arose from a need to simultaneously preserve the speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model when based on the use of compressed speech are put forward.
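The core idea of product-code VQ described above can be sketched in a few lines: the input vector is split into sub-vectors, each quantized against its own small codebook, so the joint index is a tuple of per-part indices rather than a single index into one enormous codebook. The codebooks and the 4-dimensional toy vector below are invented for this sketch, not taken from the thesis.

```python
def nearest(codebook, vec):
    """Index of the codeword closest to vec (exhaustive full search)."""
    def dist(cw):
        return sum((a - b) ** 2 for a, b in zip(cw, vec))
    return min(range(len(codebook)), key=lambda i: dist(codebook[i]))

def pcvq_encode(vec, codebooks):
    """Product-code VQ: split vec into equal parts, one codebook per part.
    The bit cost is the sum of the parts' index sizes, instead of the cost
    of one huge codebook spanning the whole vector."""
    n = len(codebooks)
    part = len(vec) // n
    return tuple(nearest(cb, vec[i * part:(i + 1) * part])
                 for i, cb in enumerate(codebooks))

# Toy 4-D spectral vector split into two 2-D sub-vectors.
cb0 = [(0.0, 0.0), (1.0, 1.0)]   # codebook for the first half
cb1 = [(0.0, 1.0), (1.0, 0.0)]   # codebook for the second half
indices = pcvq_encode((0.9, 1.1, 0.1, 0.8), [cb0, cb1])
```

This also makes concrete why the thesis's fast-search question arises: the exhaustive `nearest` scan above scales linearly with codebook size, which is exactly the encoder complexity that fast-search methods try to avoid.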

Relevance: 20.00%

Abstract:

Various load compensation schemes proposed in the literature assume that the voltage source at the point of common coupling (PCC) is stiff. In practice, however, the load is remote from a distribution substation and is supplied by a feeder. In the presence of feeder impedance, the PWM inverter switchings distort both the PCC voltage and the source currents. In this paper, load compensation with such a non-stiff source is considered. A switching control of the voltage source inverter (VSI) based on state feedback is used for load compensation with a non-stiff source. The design of the state feedback controller requires careful consideration in choosing the gain matrix and in generating the reference quantities. These aspects are addressed in this paper. Detailed simulation and experimental results are given to support the control design.

Relevance: 20.00%

Abstract:

The acoustic emission (AE) technique is a popular tool for structural health monitoring of civil, mechanical and aerospace structures. It is a non-destructive method based on the rapid release of energy within a material, by crack initiation or growth, in the form of stress waves. Recording these waves by means of sensors, and subsequently analysing the recorded signals, conveys information about the nature of the source. The ability to locate the source of stress waves is an important advantage of the AE technique; but as AE waves travel in various modes and may undergo mode conversions, an understanding of the modes (‘modal analysis’) is often necessary in order to determine the source location accurately. This paper presents results of experiments aimed at finding the locations of artificial AE sources on a thin plate and identifying wave modes in the recorded signal waveforms. Different source-locating techniques are investigated and the importance of wave mode identification is explored.
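The basic arrival-time-difference principle behind AE source location can be shown in a deliberately simplified one-dimensional sketch. The sensor spacing, wave speed and time difference below are invented example values; a real plate problem is two-dimensional and, as the abstract notes, complicated by multiple dispersive wave modes.

```python
def locate_1d(dt, sensor_gap, wave_speed):
    """Locate a source on the line between two sensors from the
    arrival-time difference dt = t1 - t2, assuming a single wave mode
    with a known, constant speed v.
    With t1 = x / v and t2 = (L - x) / v:  x = (v * dt + L) / 2."""
    return (wave_speed * dt + sensor_gap) / 2

# Illustrative numbers: 1.0 m sensor spacing, 5000 m/s plate wave speed.
# A source 0.4 m from sensor 1 arrives there 40 microseconds earlier.
x = locate_1d(dt=-4.0e-5, sensor_gap=1.0, wave_speed=5000.0)
```

The sketch makes the role of mode identification concrete: the formula assumes one known `wave_speed`, so if the picked arrival belongs to a different (faster or slower) mode than assumed, the computed location is systematically wrong.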

Relevance: 20.00%

Abstract:

In this study, the host specificity and sensitivity of human- and bovine-specific adenoviruses (HS-AVs and BS-AVs) were evaluated by testing wastewater/fecal samples from various animal species in Southeast Queensland, Australia. The overall specificity and sensitivity of the HS-AVs marker were 1.0 and 0.78, respectively; for the BS-AVs marker, these figures were 1.0 and 0.73. Twenty environmental water samples were collected during wet conditions and 20 samples during dry conditions from the Maroochy Coastal River, and tested for the presence of fecal indicator bacteria (FIB), host-specific viral markers, and zoonotic bacterial and protozoan pathogens using PCR/qPCR. The concentrations of FIB in water samples collected after wet conditions were generally higher than in dry conditions. HS-AVs was detected in 20% of water samples collected during wet conditions, whereas BS-AVs was detected in both wet (10%) and dry (10%) conditions. The C. jejuni mapA and Salmonella invA genes were each detected in 10% of samples collected during dry conditions. The concentrations of Salmonella invA ranged from 3.5 × 10² to 4.3 × 10² genomic copies per 500 ml of water. The G. lamblia β-giardin gene was detected in only one sample (5%), collected during dry conditions. Weak to significant correlations were observed between FIB and viral markers and zoonotic pathogens; however, during dry conditions, no significant correlations were observed between FIB concentrations and viral markers or zoonotic pathogens. The prevalence of HS-AVs in samples collected from the study river suggests that the quality of the water is affected by human as well as bovine fecal pollution. The results suggest that HS-AVs and BS-AVs detection using PCR could be a useful tool for the identification of human-sourced fecal pollution in coastal waters.

Relevance: 20.00%

Abstract:

Fractional Fokker-Planck equations (FFPEs) have gained much interest recently for describing transport dynamics in complex systems that are governed by anomalous diffusion and nonexponential relaxation patterns. However, effective numerical methods and analytic techniques for the FFPE are still in their embryonic state. In this paper, we consider a class of time-space fractional Fokker-Planck equations with a nonlinear source term (TSFFPE-NST), which involve the Caputo time fractional derivative (CTFD) of order α ∈ (0, 1) and the symmetric Riesz space fractional derivative (RSFD) of order μ ∈ (1, 2). Approximating the CTFD and the RSFD using the L1 algorithm and the shifted Grünwald method, respectively, a computationally effective numerical method is presented to solve the TSFFPE-NST. The stability and convergence of the proposed numerical method are investigated. Finally, numerical experiments are carried out to support the theoretical claims.
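The shifted Grünwald approximation mentioned above is built from the weights w_k = (−1)^k · binom(μ, k), which are cheapest to generate by the standard recurrence rather than by evaluating binomial coefficients directly. The sketch below shows only this weight computation (with μ = 1.8 as an arbitrary example value in the paper's range (1, 2)), not the paper's full discretization scheme.

```python
def grunwald_weights(mu, n):
    """Shifted Grünwald weights w_k = (-1)**k * binom(mu, k), generated by
    the recurrence w_0 = 1,  w_k = w_{k-1} * (k - 1 - mu) / k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - mu) / k)
    return w

# mu in (1, 2) as required for the RSFD; mu = 1.8 is just an example.
w = grunwald_weights(1.8, 4)
```

Note that for μ ∈ (1, 2) the leading weights are w_0 = 1 and w_1 = −μ, with the later weights positive and decaying; this sign pattern is what the stability analysis of such schemes typically exploits.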

Relevance: 20.00%

Abstract:

Type unions, pointer variables and function pointers are a long-standing source of subtle security bugs in C program code. Their use can lead to hard-to-diagnose crashes or exploitable vulnerabilities that allow an attacker to attain privileged access over classified data. This paper describes an automatable framework for detecting such weaknesses in C programs statically, where possible, and for generating assertions that will detect them dynamically, in other cases. Based exclusively on analysis of the source code, it identifies the required assertions using a type inference system supported by a custom-made symbol table. In our preliminary findings, the type system was able to infer the correct type of unions in different scopes, without manual code annotations or rewriting. Whenever an evaluation is not possible or is difficult to resolve, appropriate runtime assertions are formed and inserted into the source code. The approach is demonstrated via a prototype C analysis tool.