221 results for Convolution Operators


Relevance: 10.00%

Abstract:

Background: Up to one-third of people affected by cancer experience ongoing psychological distress and would benefit from screening followed by an appropriate level of psychological intervention. This rarely occurs in routine clinical practice due to barriers such as lack of time and experience. This study investigated the feasibility of community-based telephone helpline operators screening callers affected by cancer for their level of distress using a brief screening tool (Distress Thermometer), and triaging to the appropriate level of care using a tiered model.

Methods: Consecutive cancer patients and carers who contacted the helpline from September-December 2006 (n = 341) were invited to participate. Routine screening and triage was conducted by helpline operators at this time. Additional socio-demographic and psychosocial adjustment data were collected by telephone interview by research staff following the initial call.

Results: The Distress Thermometer had good overall accuracy in detecting general psychosocial morbidity (Hospital Anxiety and Depression Scale cut-off score ≥ 15) for cancer patients (AUC = 0.73) and carers (AUC = 0.70). We found 73% of participants met the Distress Thermometer cut-off for distress caseness according to the Hospital Anxiety and Depression Scale (a score ≥ 4), and optimal sensitivity (83%, 77%) and specificity (51%, 48%) were obtained with cut-offs of ≥ 4 and ≥ 6 in the patient and carer groups respectively. Distress was significantly associated with the Hospital Anxiety and Depression Scale scores (total, as well as anxiety and depression subscales) and level of care in cancer patients, as well as with the Hospital Anxiety and Depression Scale anxiety subscale for carers. There was a trend for more highly distressed callers to be triaged to more intensive care, with patients with distress scores ≥ 4 more likely to receive extended or specialist care.

Conclusions: Our data suggest that it was feasible for community-based cancer helpline operators to screen callers for distress using a brief screening tool, the Distress Thermometer, and to triage callers to an appropriate level of care using a tiered model. The Distress Thermometer is a rapid and non-invasive alternative to longer psychometric instruments, and may provide part of the solution in ensuring distressed patients and carers affected by cancer are identified and supported appropriately.
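
As an illustration of how screening cut-off statistics of this kind are computed, the following is a small sketch of sensitivity and specificity for a Distress Thermometer cut-off against HADS caseness; the data values are invented and the thresholds are simply the ones quoted above, so this is not the study's analysis.

```python
def sensitivity_specificity(dt_scores, hads_scores, dt_cutoff=4, hads_caseness=15):
    """Sensitivity and specificity of a Distress Thermometer cut-off, treating a
    HADS total >= hads_caseness as the reference standard for distress."""
    tp = fp = tn = fn = 0
    for dt, hads in zip(dt_scores, hads_scores):
        case = hads >= hads_caseness         # reference-standard "distressed"
        flagged = dt >= dt_cutoff            # screened positive on the DT
        if case and flagged:
            tp += 1
        elif case:
            fn += 1
        elif flagged:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# Invented example data only: DT ratings (0-10) paired with HADS totals
dt = [2, 5, 7, 3, 8, 6, 1, 9]
hads = [10, 18, 22, 9, 25, 14, 6, 20]
print(sensitivity_specificity(dt, hads))
```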

Relevance: 10.00%

Abstract:

Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, and these signals are referred to as analog signals. Prior to the advent of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering which occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted on traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the Lecture Notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, Discrete-time Fourier, and Discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. Design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II, which is suitable for an advanced signal processing course, considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
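
Since Part I of the book centres on convolution (the topic of this results listing), a minimal sketch of discrete linear convolution may be helpful; it is an illustration only, not taken from the book, and the signal values are arbitrary.

```python
import numpy as np

def discrete_convolution(x, h):
    """Directly evaluate y[n] = sum_k x[k] * h[n - k] for finite-length sequences."""
    n_out = len(x) + len(h) - 1
    y = np.zeros(n_out)
    for n in range(n_out):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# Example: smoothing a short sequence with a 3-point moving-average filter
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.ones(3) / 3.0
print(discrete_convolution(x, h))   # direct evaluation of the convolution sum
print(np.convolve(x, h))            # NumPy's built-in result, for comparison
```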

Relevance: 10.00%

Abstract:

In this paper, a comprehensive planning methodology is proposed that can minimize the line loss, maximize the reliability and improve the voltage profile in a distribution network. The injected active and reactive power of Distributed Generators (DGs) and the installed capacitor sizes at different buses and for different load levels are optimally controlled. The tap setting of the HV/MV transformer, along with line and transformer upgrading, is also included in the objective function. A hybrid optimization method, called Hybrid Discrete Particle Swarm Optimization (HDPSO), is introduced to solve this nonlinear and discrete optimization problem. The proposed HDPSO approach is an extended version of discrete PSO in which the diversity of the optimizing variables is increased using genetic algorithm operators to avoid becoming trapped in local minima. The objective function is composed of the investment cost of DGs, capacitors, distribution lines and the HV/MV transformer, the line loss, and the reliability. All of these elements are converted into dollar terms, so a single-objective optimization method is sufficient. Bus voltage and line current constraints are satisfied during the optimization procedure. The IEEE 18-bus test system is modified and employed to evaluate the proposed algorithm. The results illustrate the unavoidable need for optimal control of DG active and reactive power and of capacitors in distribution networks.
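
A minimal sketch of the general kind of hybrid discrete PSO described above, assuming integer-encoded decision variables (e.g. capacitor size indices, DG set-point indices, tap positions); the update rule, mutation scheme and all names are illustrative assumptions, not the authors' HDPSO implementation.

```python
import numpy as np

def hybrid_discrete_pso(objective, lower, upper, n_particles=30, n_iter=200,
                        w=0.7, c1=1.5, c2=1.5, mutation_rate=0.05, seed=0):
    """Discrete PSO with a GA-style mutation operator to preserve swarm diversity.

    objective    : function mapping an integer vector to a cost to be minimised
    lower, upper : integer NumPy arrays bounding each decision variable
    """
    rng = np.random.default_rng(seed)
    dim = len(lower)
    pos = rng.integers(lower, upper + 1, size=(n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_cost = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(np.rint(pos + vel), lower, upper).astype(int)

        # GA-style mutation: randomly reset a few variables to keep the swarm diverse
        mask = rng.random((n_particles, dim)) < mutation_rate
        random_pos = rng.integers(lower, upper + 1, size=(n_particles, dim))
        pos[mask] = random_pos[mask]

        cost = np.array([objective(p) for p in pos])
        improved = cost < pbest_cost
        pbest[improved] = pos[improved]
        pbest_cost[improved] = cost[improved]
        gbest = pbest[pbest_cost.argmin()].copy()

    return gbest, pbest_cost.min()

# Toy usage: minimise a separable quadratic over five integer variables in [0, 10]
lower, upper = np.zeros(5, dtype=int), np.full(5, 10)
best, best_cost = hybrid_discrete_pso(lambda p: float(np.sum((p - 3) ** 2)), lower, upper)
```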

Relevance: 10.00%

Abstract:

In this paper we investigate the heuristic construction of bijective s-boxes that satisfy a wide range of cryptographic criteria, including algebraic complexity, high nonlinearity and low autocorrelation, and that have none of the known weaknesses such as linear structures, fixed points or linear redundancy. We demonstrate that power mappings can be evolved (by iterated mutation operators alone) to generate bijective s-boxes with the best known tradeoffs among the considered criteria. The s-boxes found are suitable for use directly in modern encryption algorithms.
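
A minimal sketch of a mutation-only search for bijective s-boxes in the spirit of the abstract; it starts from a random permutation rather than a power mapping, scores candidates on nonlinearity alone (the paper optimises several criteria), and all names and parameters are illustrative.

```python
import random

N = 8                 # s-box width in bits; reduce to 4 or 5 for a quick pure-Python run
SIZE = 1 << N

def nonlinearity(sbox):
    """Nonlinearity of an N x N s-box via the Walsh spectra of its component functions."""
    max_walsh = 0
    for b in range(1, SIZE):                                   # every nonzero output mask
        # component function f_b(x) = parity(b AND sbox[x]), stored as a +/-1 sequence
        w = [1 - 2 * (bin(b & sbox[x]).count("1") & 1) for x in range(SIZE)]
        h = 1
        while h < SIZE:                                        # fast Walsh-Hadamard transform
            for i in range(0, SIZE, 2 * h):
                for j in range(i, i + h):
                    w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
            h *= 2
        max_walsh = max(max_walsh, max(abs(v) for v in w))
    return SIZE // 2 - max_walsh // 2

def evolve_sbox(iterations=100, seed=0):
    """Mutation-only hill climbing: swap two outputs (which preserves bijectivity)
    and keep the swap whenever the nonlinearity does not decrease."""
    random.seed(seed)
    sbox = list(range(SIZE))
    random.shuffle(sbox)
    best = nonlinearity(sbox)
    for _ in range(iterations):
        i, j = random.sample(range(SIZE), 2)
        sbox[i], sbox[j] = sbox[j], sbox[i]
        nl = nonlinearity(sbox)
        if nl >= best:
            best = nl
        else:
            sbox[i], sbox[j] = sbox[j], sbox[i]                # revert the mutation
    return sbox, best

# Example (slow for N = 8 in pure Python): sbox, nl = evolve_sbox(iterations=100)
```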

Relevance: 10.00%

Abstract:

Characteristics of surveillance video generally include low resolution and poor quality due to environmental, storage and processing limitations. It is extremely difficult for computers and human operators to identify individuals from these videos. To overcome this problem, super-resolution can be used in conjunction with an automated face recognition system to enhance the spatial resolution of video frames containing the subject and narrow down the number of manual verifications performed by the human operator by presenting a list of the most likely candidates from the database. As the super-resolution reconstruction process is ill-posed, visual artifacts are often generated as a result. These artifacts can be visually distracting to humans and/or affect machine recognition algorithms. While it is intuitive that higher resolution should lead to improved recognition accuracy, the effects of super-resolution and such artifacts on face recognition performance have not been systematically studied. This paper aims to address this gap while illustrating that super-resolution allows more accurate identification of individuals from low-resolution surveillance footage. The proposed optical flow-based super-resolution method is benchmarked against Baker et al.’s hallucination and Schultz et al.’s super-resolution techniques on images from the Terrascope and XM2VTS databases. Ground truth and interpolated images were also tested to provide a baseline for comparison. Results show that a suitable super-resolution system can improve the discriminability of surveillance video and enhance face recognition accuracy. The experiments also show that Schultz et al.’s method fails when dealing with surveillance footage due to its assumption of rigid objects in the scene. The hallucination and optical flow-based methods performed comparably, with the optical flow-based method producing fewer of the visually distracting artifacts that interfere with human recognition.

Relevance: 10.00%

Abstract:

The Web has become a worldwide repository of information which individuals, companies, and organizations utilize to solve or address various information problems. Many of these Web users utilize automated agents to gather this information for them. Some assume that this approach represents a more sophisticated method of searching. However, there is little research investigating how Web agents search for online information. In this research, we first provide a classification for information agents based on stages of information gathering, gathering approaches, and agent architecture. We then examine an implementation of one of the resulting classifications in detail, investigating how agents search for information on Web search engines, including the session, query, term, duration and frequency of interactions. For this temporal study, we analyzed three data sets of queries and page views from agents interacting with the Excite and AltaVista search engines from 1997 to 2002, examining approximately 900,000 queries submitted by over 3,000 agents. Findings include: (1) agent sessions are extremely interactive, with sometimes hundreds of interactions per second; (2) agent queries are comparable to those of human searchers, with little use of query operators; (3) Web agents search for a relatively limited variety of information, with only 18% of the terms used being unique; and (4) the duration of agent-Web search engine interaction typically spans several hours. We discuss the implications for Web information agents and search engines.
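
A small sketch of the kind of log analysis described above, computing per-agent interaction rates and the share of unique query terms; the log format, field names and records are hypothetical, not those of the Excite or AltaVista data sets.

```python
from collections import defaultdict

# Hypothetical log records: (agent_id, timestamp in seconds, query string)
log = [
    ("agent-1", 0.00, "convolution operators"),
    ("agent-1", 0.01, "convolution operators pdf"),
    ("agent-1", 0.02, "fourier transform"),
    ("agent-2", 5.00, "airport capacity model"),
    ("agent-2", 9.00, "airport capacity model"),
]

queries_by_agent = defaultdict(list)
for agent, ts, query in log:
    queries_by_agent[agent].append((ts, query))

all_terms = []
for agent, records in queries_by_agent.items():
    times = [ts for ts, _ in records]
    duration = max(times) - min(times)
    rate = len(records) / duration if duration > 0 else float("inf")
    all_terms += [term for _, q in records for term in q.split()]
    print(f"{agent}: {len(records)} queries, {rate:.1f} interactions per second")

unique_share = len(set(all_terms)) / len(all_terms)
print(f"Unique terms: {unique_share:.0%} of all terms submitted")
```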

Relevance: 10.00%

Abstract:

Biomarker analysis has been implemented in sports research in an attempt to monitor the effects of exertion and fatigue in athletes. This study proposed that while such biomarkers may be useful for monitoring injury risk in workers, proteomic approaches might also be utilised to identify novel exertion or injury markers. We found that urinary urea and cortisol levels were significantly elevated in mining workers following a 12-hour overnight shift. These levels failed to return to baseline over 24 h in the more active maintenance crew compared to truck drivers (operators), suggesting a lack of recovery between shifts. Use of a SELDI-TOF MS approach to detect novel exertion or injury markers revealed a spectral feature which was associated with workers in both work categories who were engaged in higher levels of physical activity. This feature was identified as the LG3 peptide, a C-terminal fragment of the anti-angiogenic/anti-tumourigenic protein endorepellin. This finding suggests that urinary LG3 peptide may be a biomarker of physical activity. It is also possible that the activity-mediated release of LG3/endorepellin into the circulation may represent a biological mechanism for the known inverse association between physical activity and cancer risk/survival.

Relevance: 10.00%

Abstract:

Deterministic transit capacity analysis applies to the planning, design and operational management of urban transit systems. The Transit Capacity and Quality of Service Manual (1) and Vuchic (2, 3) enable transit performance to be quantified and assessed using transit capacity and productive capacity. This paper further defines important productive performance measures of an individual transit service and a transit line. Transit work (p-km) captures the transit task performed over distance. Passenger transmission (p-km/h) captures the passenger task delivered by a service at speed. Transit productiveness (p-km/h) captures transit work performed over time. These measures are useful to operators in understanding their services’ or systems’ capabilities and passenger quality of service. This paper accounts for variability in utilized passenger demand along a line and for high passenger load conditions in which passenger pass-up delay occurs. A hypothetical case study of an individual bus service’s operation demonstrates the usefulness of passenger transmission in comparing existing and growth scenarios. A hypothetical case study of a bus line’s operation during a peak hour window demonstrates the theory’s usefulness in examining the contribution of individual services to line productive performance. The theory may be used to benchmark or compare lines and segments under different conditions, or to assess proposed improvements.
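
A toy calculation of the productive performance measures named above, for a single bus trip; the formulas are assumptions inferred from the stated units (p-km and p-km/h) rather than the paper's definitions, and all values are invented.

```python
# One bus trip broken into segments:
# (passengers on board, segment length in km, segment travel time in hours)
segments = [
    (20, 2.0, 0.10),
    (35, 3.5, 0.15),
    (28, 1.5, 0.08),
]

transit_work = sum(p * d for p, d, _ in segments)        # passenger-km (p-km)
travel_time = sum(t for _, _, t in segments)             # hours
productiveness = transit_work / travel_time              # p-km/h delivered over the trip

# Reading "passenger transmission" as load x speed on each segment (p-km/h)
transmission = [p * (d / t) for p, d, t in segments]

print(f"Transit work:   {transit_work:.1f} p-km")
print(f"Productiveness: {productiveness:.1f} p-km/h")
print(f"Transmission:   {[round(x, 1) for x in transmission]} p-km/h per segment")
```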

Relevance: 10.00%

Abstract:

The current regulatory approach to coal seam gas projects in Queensland is based on the philosophy of adaptive environmental management. This method of “learning by doing” is implemented in Queensland primarily through the imposition of layered monitoring and reporting duties on the coal seam gas operator alongside obligations to compensate and “make good” harm caused. The purpose of this article is to provide a critical review of the Queensland regulatory approach to the approval and minimisation of adverse impacts from coal seam gas activities. Following an overview of the hallmarks of an effective adaptive management approach, this article begins by addressing the mosaic of approval processes and impact assessment regimes that may apply to coal seam gas projects. This includes recent Strategic Cropping Land reforms. This article then turns to consider the preconditions for land access in Queensland and the emerging issues for landholders relating to the negotiation of access and compensation agreements. This article then undertakes a critical review of the environmental duties imposed on coal seam gas operators relating to hydraulic fracturing, well head leaks, groundwater management and the disposal and beneficial use of produced water. Finally, conclusions are drawn regarding the overall effectiveness of the Queensland framework and the lessons that may be drawn from Queensland’s adaptive environmental management approach.

Relevance: 10.00%

Abstract:

This article presents a critical analysis of the current and proposed CCS legal frameworks across a number of jurisdictions in Australia in order to examine the legal treatment of the risks of carbon leakage from CCS operations. It does so through an analysis of the statutory obligations and liability rules established under the offshore Commonwealth and Victorian regimes, and onshore Queensland and Victorian legislative frameworks. Exposure draft legislation for CCS laws in Western Australia is also examined. In considering where the losses will fall in the event of leakage, the potential tortious and statutory liabilities of private operators and the State are addressed alongside the operation of statutory protections from liability. The current legal treatment of CCS under the new Australian Carbon Pricing Mechanism is also critiqued.

Relevance: 10.00%

Abstract:

Airports represent the epitome of complex systems, with multiple stakeholders, multiple jurisdictions and complex interactions between many actors. The large number of existing models that capture different aspects of the airport is a testament to this. However, these existing models do not systematically consider modelling requirements, nor how stakeholders such as airport operators or airlines would make use of them. This can detrimentally impact the verification and validation of models and makes the development of extensible and reusable modelling tools difficult. This paper develops, from the Concept of Operations (CONOPS) framework, a methodology to help structure the review and development of modelling capabilities and usage scenarios. The method is applied to a review of existing airport terminal passenger models. It is found that existing models can be broadly categorised according to four usage scenarios: capacity planning; operational planning and design; security policy and planning; and airport performance review. The models, the performance metrics that they evaluate and their usage scenarios are discussed. It is found that capacity and operational planning models predominantly focus on performance metrics such as waiting time, service time and congestion, whereas performance review models attempt to link those metrics to passenger satisfaction outcomes. Security policy models, on the other hand, focus on probabilistic risk assessment. However, there is an emerging focus on the need to capture trade-offs between multiple criteria such as security and processing time. Based on the CONOPS framework and literature findings, guidance is provided for the development of future airport terminal models.

Relevance: 10.00%

Abstract:

A Multimodal Seaport Container Terminal (MSCT) is a complex system which requires careful planning and control in order to operate efficiently. It consists of a number of subsystems that require optimisation of the operations within them, as well as synchronisation of machines and containers between the various subsystems. Inefficiency in the terminal can delay ships from their scheduled timetables, as well as cause delays in delivering containers to their inland destinations, both of which can be very costly to their operators. The purpose of this PhD thesis is to use Operations Research methodologies to optimise and synchronise these subsystems as an integrated application. An initial model is developed for the overall MSCT; however, due to the large number of assumptions that had to be made, as well as other issues, it is found to be too inaccurate and infeasible for practical use. Instead, a method is proposed in which models are developed for each subsystem and then integrated with each other. Mathematical models are developed for the Storage Area System (SAS) and the Intra-terminal Transportation System (ITTS). The SAS deals with the movement and assignment of containers to stacks within the storage area, both when they arrive and when they are rehandled to retrieve containers below them. The ITTS deals with scheduling the movement of containers and machines between the storage areas and other sections of the terminal, such as the berth and road/rail terminals. Various constructive heuristics are explored and compared for these models to produce good initial solutions for large-sized problems, which are otherwise impractical to compute by exact methods. These initial solutions are further improved through an innovative hyper-heuristic algorithm that integrates the SAS and ITTS solutions and optimises them using meta-heuristic techniques. The method by which the two models interact with each other as an integrated system is discussed, as well as how this method can be extended to the other subsystems of the MSCT.
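
A minimal sketch of a selection hyper-heuristic of the general kind the abstract alludes to; the move operators, reward scores and simulated-annealing acceptance rule are generic illustrations, not the thesis's algorithm.

```python
import math
import random

def hyper_heuristic(initial_solution, cost, low_level_heuristics,
                    n_iter=5000, temperature=1.0, cooling=0.999, seed=0):
    """Selection hyper-heuristic with reward-based operator choice and a
    simulated-annealing acceptance rule.

    initial_solution     : starting solution (e.g. a combined SAS + ITTS assignment)
    cost                 : function mapping a solution to a scalar cost to minimise
    low_level_heuristics : list of functions, each returning a perturbed copy of a solution
    """
    random.seed(seed)
    current, current_cost = initial_solution, cost(initial_solution)
    best, best_cost = current, current_cost
    scores = [1.0] * len(low_level_heuristics)        # adaptive selection weights

    for _ in range(n_iter):
        # pick a low-level heuristic in proportion to its past success
        idx = random.choices(range(len(low_level_heuristics)), weights=scores)[0]
        candidate = low_level_heuristics[idx](current)
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost

        # always accept improvements; accept worse moves with a cooling probability
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current, current_cost = candidate, candidate_cost
            scores[idx] += max(0.0, -delta)            # reward heuristics that helped
        temperature *= cooling

        if current_cost < best_cost:
            best, best_cost = current, current_cost

    return best, best_cost
```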

Relevance: 10.00%

Abstract:

Retrofit projects are different from newly-built projects in many respects. A retrofit project involves an existing building, which imposes constraints on the owners, designers, operators and constructors throughout the project process. Retrofit projects are risky, complex, less predictable and difficult to plan well, and they require greater coordination. For office building retrofit projects, further restrictions apply, as these buildings are often located in CBD areas and most have to remain operational during the progression of project work. Issues such as site space, material storage and handling, and noise and dust need to be considered and well addressed. In this context, waste management is even more challenging, with small spaces for waste handling, uncertainties in waste control, and the impact of waste management activities on project delivery and building occupants. The current literature on waste management in office building retrofit projects focuses on increasing waste recovery rates through project planning, monitoring and stakeholder collaboration. However, previous research has not examined the particular retrofit processes and their impact on waste generation and management. This paper discusses the interim results of continuing research on new strategies for waste management in office building retrofit projects. First, based on a literature review, it summarizes the unique characteristics of office building retrofit projects and their influence on waste management, and an assumption on waste management strategies is formed. Semi-structured interviews were then conducted with industry practitioners, and the findings are presented in the paper. The research assumption was validated in the interviews through the opinions and experiences of the respondents. Finally, the research develops a process model for waste management in office building retrofit projects, which introduces two different waste management strategies. For the dismantling phase, waste is generated quickly as the work progresses, so integrated planning of project delivery and waste generation is needed in order to organize prompt handling and treatment. For the fit-out phase, the work is similar to new construction, and factors which are particularly linked to generating waste on site need to be controlled and monitored. Continuing research in this space will help improve the practice of waste management in office building retrofit projects. The new strategies will help promote the practicality of project waste planning and management and stakeholders’ capability of coordinating waste management and project delivery.

Relevance: 10.00%

Abstract:

Fractional differential equations are becoming more widely accepted as a powerful tool for modelling anomalous diffusion, which is exhibited by various materials and processes. Recently, researchers have suggested that rather than using constant-order fractional operators, some processes are more accurately modelled using fractional orders that vary with time and/or space. In this paper we develop computationally efficient techniques for solving time-variable-order time-space fractional reaction-diffusion equations (tsfrde) using a finite difference scheme. We adopt the Coimbra variable-order time fractional operator and a variable-order fractional Laplacian operator in space, where both orders are functions of time. Because the fractional operator is nonlocal, it is challenging to deal efficiently with its long-range dependence when using classical numerical techniques to solve such equations. The novelty of our method is that the numerical solution of the time-variable-order tsfrde is written in terms of a matrix function vector product at each time step. This product is approximated efficiently by the Lanczos method, which is a powerful iterative technique for approximating the action of a matrix function by projecting onto a Krylov subspace. Furthermore, an adaptive preconditioner is constructed that dramatically reduces the size of the required Krylov subspaces and hence the overall computational cost. Numerical examples, including the variable-order fractional Fisher equation, are presented to demonstrate the accuracy and efficiency of the approach.
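
A minimal sketch of the Lanczos approximation of a matrix function acting on a vector, the core operation the abstract describes; the matrix, the function (a matrix exponential here rather than one arising from a fractional-operator discretisation) and the dimensions are illustrative only, and the adaptive preconditioner is not shown.

```python
import numpy as np

def lanczos_matrix_function(A, b, f, m=30):
    """Approximate f(A) b for symmetric A by projecting onto an m-dimensional Krylov
    subspace with the Lanczos process:  f(A) b ~ ||b|| V_m f(T_m) e_1."""
    n = len(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j + 1 < m:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:              # invariant subspace reached: stop early
                m = j + 1
                break
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    evals, evecs = np.linalg.eigh(T)         # f(T_m) via the small eigendecomposition
    fT_e1 = evecs @ (f(evals) * evecs[0, :])
    return beta0 * V[:, :m] @ fT_e1

# Toy usage: apply exp(-A) to a vector for a 1-D Laplacian-like matrix
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.random.default_rng(0).random(n)
approx = lanczos_matrix_function(A, b, lambda x: np.exp(-x), m=40)
```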

Relevance: 10.00%

Abstract:

The objective of this chapter is to provide rail practitioners with a practical approach for determining the safety requirements of low-cost level crossing warning devices (LCLCWDs) on an Australian railway by way of a case study. LCLCWDs, in theory, allow railway operators to improve the safety of passively controlled crossings by upgrading a larger number of level crossings with the same budget that would otherwise be used to upgrade these crossings using conventional active level crossing control technologies, e.g. track-circuit-initiated flashing light systems. The chapter discusses the experience of, and obstacles to, adopting LCLCWDs in Australia, and demonstrates how the risk-based approach may be used to make the case for LCLCWDs.