987 results for code source
Abstract:
A composite line source emission (CLSE) model was developed to quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. The model took into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to the various driving conditions (cruise, deceleration, idle and acceleration), and it used multiple representative segments to accurately capture the emission distribution of real vehicle flow. Hence, the model was able to quickly quantify the time spent in each segment within the considered zone, as well as the composition and position of the requisite segments based on the vehicle fleet information, which not only helped to quantify the enhanced emissions at critical locations, but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bi-directional bus station used by diesel and compressed natural gas fuelled buses. The acceleration distance was found to be of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, while no significant increases were observed at locations where they idled. Emissions at the front end of the platform were also shown to be 43 times greater than at the rear of the platform. Although the CLSE model is intended to be applied in traffic management and transport analysis systems for the evaluation of exposure, as well as for the simulation of vehicle emissions in traffic-interrupted microenvironments, the bus station model can also be used to provide initial source definitions for future dispersion models.
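As a rough illustration of the segment-based accounting the CLSE model describes, the sketch below sums mode-specific emission rates over the time spent in each representative segment of the queueing zone. All modes, rates and dwell times are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of composite line source emission (CLSE) style accounting.
# Emission rates and segment data are illustrative placeholders, not values
# from the study: rates in particles per second, times in seconds.

EMISSION_RATES = {          # hypothetical mode-specific particle number rates
    "cruise": 1.0e12,
    "decelerate": 0.8e12,
    "idle": 0.5e12,
    "accelerate": 3.5e12,
}

# Each segment of the interrupted-flow zone: (driving mode, mean time spent [s])
segments = [
    ("decelerate", 6.0),
    ("idle", 20.0),
    ("accelerate", 8.0),
    ("cruise", 4.0),
]

def segment_emissions(segments, rates):
    """Return per-segment and total particle number emissions for one vehicle pass."""
    per_segment = [(mode, rates[mode] * t) for mode, t in segments]
    total = sum(e for _, e in per_segment)
    return per_segment, total

per_segment, total = segment_emissions(segments, EMISSION_RATES)
for mode, e in per_segment:
    print(f"{mode:>10}: {e:.2e} particles")
print(f"     total: {total:.2e} particles per vehicle pass")
```

Summing such contributions over all vehicles in the fleet, segment by segment, is what lets the enhanced emissions at critical locations (for example the acceleration section) stand out against the idling sections.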
Abstract:
Managing the sustainability of urban infrastructure requires regular health monitoring of key infrastructure such as bridges. The process of structural health monitoring involves monitoring a structure over a period of time using appropriate sensors, extracting damage-sensitive features from the measurements made by the sensors, and analysing these features to determine the current state of the structure. Various techniques are available for structural health monitoring, and acoustic emission is one technique that is finding increasing use in the monitoring of civil infrastructure such as bridges. The acoustic emission technique is based on the recording of stress waves generated by the rapid release of energy inside a material, followed by analysis of the recorded signals to locate and identify the source of emission and assess its severity. This chapter first provides a brief background on the acoustic emission technique and the process of source localization. Results from laboratory experiments conducted to explore several aspects of the source localization process are also presented. The findings from the study can be expected to enhance knowledge of the acoustic emission process and to aid the development of effective diagnostic systems for bridge structures.
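A minimal sketch of the source localization step described above, for the simplest one-dimensional case: two sensors on a bar and a time difference of arrival. The sensor spacing, wave speed and arrival times are assumed values for illustration, not results from the laboratory experiments.

```python
# Minimal sketch of one-dimensional acoustic emission source localisation
# using the time difference of arrival (TDOA) between two sensors on a bar.
# Sensor positions, wave speed and arrival times are illustrative placeholders.

def locate_source_1d(x1, x2, t1, t2, wave_speed):
    """
    Return the estimated source position (m) for a source lying between
    two sensors at x1 and x2 (m), given arrival times t1, t2 (s) and the
    stress-wave propagation speed (m/s).
    """
    delta_t = t1 - t2
    return 0.5 * (x1 + x2) + 0.5 * wave_speed * delta_t

# Example: sensors 1.0 m apart, steel-like wave speed of ~5000 m/s (assumed)
x_est = locate_source_1d(x1=0.0, x2=1.0, t1=8.0e-5, t2=12.0e-5, wave_speed=5000.0)
print(f"Estimated source position: {x_est:.3f} m")
```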
Abstract:
In this paper we present a novel distributed coding protocol for multi-user cooperative networks. The proposed protocol exploits existing orthogonal space-time block codes to achieve a higher diversity gain by repeating the code across time and space (the available relay nodes). The achievable diversity gain depends on the number of relay nodes that can fully decode the signal from the source. These relay nodes then form space-time codes to cooperatively relay to the destination over a number of time slots. However, the improved diversity gain is achieved at the expense of the transmission rate. The design principles of the proposed distributed space-time code, together with the issues related to the trade-off between transmission rate and diversity, are discussed in detail. We show that the proposed distributed space-time coding protocol outperforms existing distributed codes with a variable transmission rate.
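As a sketch of the orthogonal space-time block codes the protocol builds on, the snippet below forms the classic 2x2 Alamouti block that two decoding relays could transmit over two time slots (one column per relay) and verifies its orthogonality. The symbols are illustrative QPSK points; this is not the paper's exact distributed protocol, only the underlying building block.

```python
# Minimal sketch of the 2x2 Alamouti orthogonal space-time block code.
# Rows are time slots, columns are relays/antennas; symbols are placeholders.

import numpy as np

def alamouti_encode(s1, s2):
    """Return the 2x2 Alamouti code matrix for two complex symbols."""
    return np.array([[s1,              s2],
                     [-np.conj(s2), np.conj(s1)]])

symbols = np.array([1 + 1j, 1 - 1j]) / np.sqrt(2)   # example QPSK symbols
block = alamouti_encode(symbols[0], symbols[1])
print(block)

# Orthogonality check: C^H C = (|s1|^2 + |s2|^2) * I, which is what enables
# simple linear decoding and the full diversity gain.
print(np.allclose(block.conj().T @ block,
                  (np.abs(symbols) ** 2).sum() * np.eye(2)))
```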
Abstract:
There is currently a migration trend from traditional electrical supervisory control and data acquisition (SCADA) systems towards a smart grid based approach to critical infrastructure management. This project provides an evaluation of existing and proposed implementations for both traditional electrical SCADA and smart grid based architectures, and proposes a set of reference requirements that test bed implementations should satisfy. A high-level design for smart grid test beds is proposed, and an initial implementation based on this design is carried out using open source and freely available software tools. The project examines the move towards smart grid based critical infrastructure management and illustrates the increased security requirements. The implemented test bed provides a basic framework for testing network requirements in a smart grid environment, as well as a platform for further research and development, particularly for developing, implementing and testing responses to network security related disturbances, such as intrusion detection and network forensics. The project proposes and develops an architecture for emulating some smart grid functionality. The Common Open Research Emulator (CORE) platform was used to emulate the communication network of the smart grid; specifically, CORE was used to virtualise and emulate the TCP/IP networking stack. This is intended to be used for further evaluation and analysis, for example the analysis of application protocol messages. As a proof of concept, software libraries were designed, developed and documented to enable and support the design and development of further emulated smart grid components, such as reclosers, switches and smart meters. As part of the testing and evaluation, a Modbus based smart meter emulator was developed to provide the basic functionality of a smart meter. Further code was developed to send Modbus request messages to the emulated smart meter and to receive Modbus responses from it. Although the functionality of the emulated components was limited, they provide a starting point for further research and development, and the design is extensible to enable the implementation of additional SCADA protocols. The project also defines evaluation criteria for the implemented test bed, and experiments are designed to evaluate the test bed against these criteria. The results of the experiments are collated and presented, and conclusions are drawn from them to facilitate discussion of the test bed implementation. The discussion also presents possible future work.
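A minimal sketch of the kind of Modbus request code described above, building a raw Modbus/TCP "read holding registers" frame with only the standard library. The host, port, unit identifier and register addresses are placeholders, not values from the project's test bed, and the emulated meter is assumed to be listening on the conventional Modbus port.

```python
# Minimal sketch of sending a Modbus/TCP "read holding registers" request to an
# emulated smart meter. Addresses, unit id and register ranges are placeholders.

import socket
import struct

def read_holding_registers(host, port, unit_id, start_addr, count, tx_id=1):
    # PDU: function code 0x03 (read holding registers), start address, quantity
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    mbap = struct.pack(">HHHB", tx_id, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(mbap + pdu)
        response = sock.recv(260)   # maximum Modbus ADU size
    return response

# Example against a hypothetical emulated meter on localhost:
# raw = read_holding_registers("127.0.0.1", 502, unit_id=1, start_addr=0, count=2)
# print(raw.hex())
```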
Abstract:
In this paper, two different high bandwidth converter control strategies are discussed: one for voltage control and the other for current control. In each case, the converter is equipped with an output passive filter: an LC filter for the voltage controller and an LCL filter for the current controller. An important aspect discussed in the paper is the avoidance of computing unnecessary references, achieved by using high-pass filters in the feedback loop. The stability of the overall system, including the high-pass filters, is analyzed. The choice of filter parameters is crucial for achieving desirable system performance. The bandwidth of achievable performance is presented through frequency (Bode) plots of the system gains. It is illustrated that the proposed controllers are capable of tracking fundamental frequency components along with low-order harmonic components. Extensive simulation results are presented to validate the control concepts presented in the paper.
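To illustrate the kind of frequency-domain check the paper reports, the sketch below computes the Bode response of a generic LC output filter with scipy. The component values and damping resistance are assumed for illustration only and are not the paper's design parameters.

```python
# Minimal sketch of a Bode-plot check for a generic LC output filter.
# L, C and the series resistance r are illustrative placeholders.

import numpy as np
from scipy import signal

L = 2e-3      # filter inductance [H] (assumed)
C = 50e-6     # filter capacitance [F] (assumed)
r = 0.1       # series resistance [ohm] (assumed damping)

# Vout/Vin of the series L-r, shunt C filter: 1 / (L*C*s^2 + r*C*s + 1)
lc_filter = signal.TransferFunction([1.0], [L * C, r * C, 1.0])

w = np.logspace(2, 5, 400)                       # frequency grid [rad/s]
w, mag_db, phase_deg = signal.bode(lc_filter, w)

f_res = 1.0 / (2 * np.pi * np.sqrt(L * C))
print(f"Filter resonant frequency ~ {f_res:.0f} Hz")
print(f"Gain at lowest plotted frequency: {mag_db[0]:.2f} dB")
```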
Abstract:
Following the completion of the draft Human Genome in 2001, genomic sequence data is becoming available at an accelerating rate, fueled by advances in sequencing and computational technology. Meanwhile, large collections of astronomical and geospatial data have allowed the creation of virtual observatories, accessible throughout the world and requiring only commodity hardware. Through a combination of advances in data management, data mining and visualization, this infrastructure enables the development of new scientific and educational applications as diverse as galaxy classification and real-time tracking of earthquakes and volcanic plumes. In the present paper, we describe steps taken along a similar path towards a virtual observatory for genomes – an immersive three-dimensional visual navigation and query system for comparative genomic data.
Abstract:
Airborne fine particles were collected at a suburban site in Queensland, Australia between 1995 and 2003. The samples were analysed for 21 elements, and Positive Matrix Factorisation (PMF), the Preference Ranking Organisation METHod for Enrichment Evaluation (PROMETHEE) and Graphical Analysis for Interactive Assistance (GAIA) were applied to the data. PROMETHEE provided information on the ranking of pollutant levels across the sampling years, while PMF provided insights into the sources of the pollutants, their chemical composition, most likely locations and relative contribution to the levels of particulate pollution at the site. PROMETHEE and GAIA found that the removal of lead from fuel in the area had a significant impact on the pollution patterns, while PMF identified six pollution sources: Railways (5.5%), Biomass Burning (43.3%), Soil (9.2%), Sea Salt (15.6%), Aged Sea Salt (24.4%) and Motor Vehicles (2.0%). The results thus provide information that can assist in formulating mitigation measures for air pollution.
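A minimal sketch of the non-negative factorisation idea behind PMF, using scikit-learn's NMF on synthetic data as a stand-in: the samples-by-species matrix is decomposed into source contributions and source profiles, and each factor's share of the reconstructed mass is reported. True PMF additionally weights the fit by per-measurement uncertainties; the data and factor count below are placeholders, not the study's measurements.

```python
# Sketch of PMF-style source apportionment: X ~= G @ F with non-negative
# contributions G (samples x sources) and profiles F (sources x species).

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_samples, n_species, n_sources = 200, 21, 6     # placeholder dimensions

true_profiles = rng.random((n_sources, n_species))
true_contribs = rng.random((n_samples, n_sources))
X = true_contribs @ true_profiles + 0.01 * rng.random((n_samples, n_species))

model = NMF(n_components=n_sources, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)        # source contributions per sample
F = model.components_             # source chemical profiles

# Each factor's share of the reconstructed mass
source_totals = G.sum(axis=0) * F.sum(axis=1)
shares = 100 * source_totals / source_totals.sum()
for k, pct in enumerate(shares, start=1):
    print(f"Factor {k}: {pct:.1f}% of reconstructed mass")
```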
Abstract:
The open source juggernaut seems to be gaining pace. The open source model certainly has appeal - cutting costs, while at the same time potentially increasing staff and system efficiencies. However, open source poses a number of significant legal challenges and risks for those that incorporate it. Clients need to look carefully before leaping.
Abstract:
Background: Internationally, research on child maltreatment-related injuries has been hampered by a lack of available routinely collected health data to identify cases, examine causes, identify risk factors and explore health outcomes. Routinely collected hospital separation data coded using the International Classification of Diseases and Related Health Problems (ICD) system provide an internationally standardised data source for classifying and aggregating diseases, injuries, causes of injuries and related health conditions for statistical purposes. However, there has been limited research examining the reliability of these data for child maltreatment surveillance purposes. This study examined the reliability of coding of child maltreatment in Queensland, Australia. Methods: A retrospective medical record review and recoding methodology was used to assess the reliability of coding of child maltreatment. A stratified sample of hospitals across Queensland was selected for this study, and a stratified random sample of cases was selected from within those hospitals. Results: In 3.6% of cases the coders disagreed on whether any maltreatment code could be assigned (definite or possible) versus no maltreatment being assigned (unintentional injury), giving a sensitivity of 0.982 and a specificity of 0.948. The review of the cases where discrepancies existed revealed that all had some indications of risk documented in the records. 15.5% of cases originally assigned a definite or possible maltreatment code were recoded to a more or less definite stratum. In terms of the number and type of maltreatment codes assigned, the auditor assigned a greater number of maltreatment types based on the medical documentation than the original coder (22% of the auditor-coded cases had more than one maltreatment type assigned, compared to only 6% of the originally coded data). The maltreatment types that were most ‘under-coded’ by the original coder were psychological abuse and neglect. Cases coded with a sexual abuse code showed the highest level of reliability. Conclusion: Given the increasing international attention being given to improving the uniformity of reporting of child maltreatment-related injuries, and the emphasis on better utilisation of routinely collected health data, this study provides an estimate of the reliability of maltreatment-specific ICD-10-AM codes assigned in an inpatient setting.
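For clarity on how the reported sensitivity and specificity are derived, the sketch below computes them from a 2x2 agreement table between the original coder and the auditor. The cell counts are hypothetical, chosen only to reproduce figures close to the reported 0.982 and 0.948; the study does not report the underlying counts here.

```python
# Sensitivity/specificity of the original coding, treating the auditor's
# recoding as the reference standard. All counts are hypothetical illustrations.

def sensitivity_specificity(tp, fn, tn, fp):
    """Agreement on 'any maltreatment code assigned' vs 'unintentional injury'."""
    sensitivity = tp / (tp + fn)   # reference maltreatment cases also coded as maltreatment
    specificity = tn / (tn + fp)   # reference unintentional injuries also coded as unintentional
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=275, fn=5, tn=182, fp=10)
print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}")
```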
Abstract:
"This column is distinguished from previous Impact columns in that it concerns the development tightrope between research and commercial take-up and the role of the LGPL in an open source workflow toolkit produced in a University environment. Many ubiquitous systems have followed this route, (Apache, BSD Unix, ...), and the lessons this Service Oriented Architecture produces cast yet more light on how software diffuses out to impact us all." Michiel van Genuchten and Les Hatton Workflow management systems support the design, execution and analysis of business processes. A workflow management system needs to guarantee that work is conducted at the right time, by the right person or software application, through the execution of a workflow process model. Traditionally, there has been a lack of broad support for a workflow modeling standard. Standardization efforts proposed by the Workflow Management Coalition in the late nineties suffered from limited support for routing constructs. In fact, as later demonstrated by the Workflow Patterns Initiative (www.workflowpatterns.com), a much wider range of constructs is required when modeling realistic workflows in practice. YAWL (Yet Another Workflow Language) is a workflow language that was developed to show that comprehensive support for the workflow patterns is achievable. Soon after its inception in 2002, a prototype system was built to demonstrate that it was possible to have a system support such a complex language. From that initial prototype, YAWL has grown into a fully-fledged, open source workflow management system and support environment
Abstract:
SAP and its research partners have been developing a language for describing details of Services from various viewpoints called the Unified Service Description Language (USDL). At the time of writing, version 3.0 describes technical implementation aspects of services, as well as stakeholders, pricing, lifecycle, and availability. Work is also underway to address other business and legal aspects of services. This language is designed to be used in service portfolio management, with a repository of service descriptions being available to various stakeholders in an organisation to allow for service prioritisation, development, deployment and lifecycle management. The structure of the USDL metadata is specified using an object-oriented metamodel that conforms to UML, MOF and EMF Ecore. As such it is amenable to code generation for implementations of repositories that store service description instances. Although Web services toolkits can be used to make these programming language objects available as a set of Web services, the practicalities of writing distributed clients against over one hundred class definitions, containing several hundred attributes, will make for very large WSDL interfaces and highly inefficient “chatty” implementations. This paper gives the high-level design for a completely model-generated repository for any version of USDL (or any other data-only metamodel), which uses the Eclipse Modelling Framework’s Java code generation, along with several open source plugins, to create a robust, transactional repository running in a Java application with a relational datastore. However, the repository exposes a generated WSDL interface at a coarse granularity, suitable for distributed client code and user-interface creation. It uses heuristics to drive code generation to bridge between the Web service and EMF granularities.
Abstract:
Research on workforce diversity at the organisational level gained momentum in the 1990s, because of the growing trend in HR research to link HR practices with organisational performance. The new parallel wave of research focused on the business case for diversity, in which diversity was linked to organisational performance. However, the results of these studies, mainly focusing on linear diversity-performance relationships, have been inconsistent. Based on contrasting theories, this paper proposes three competing predictions of the gender diversity-performance relationship at the organisational level: a positive linear relationship derived from the resource-based view of the firm, a negative linear relationship derived from self-categorisation and social identity theories, and a U-shaped curvilinear relationship derived from the integration of the resource-based view of the firm with self-categorisation and social identity theories. The U-shaped relationship accounts for the inconsistent findings in past research, because different proportions of men and women produce different social dynamics that have different effects on organisational performance. Further, the proposed U-shaped relationship can have different slopes in the manufacturing and services industries. The paper contributes to the field of diversity by strengthening its weak theoretical foundations and by highlighting the industry differences.