638 results for Multiple generation scenarios
Abstract:
Transit agencies across the world are increasingly shifting their fare collection mechanisms towards fully automated systems such as the smart card. One objective in implementing such a system is to reduce the boarding time per passenger and hence the overall dwell time for buses at bus stops and bus rapid transit (BRT) stations. TransLink, the transit authority responsible for public transport management in South East Queensland, has introduced ‘GoCard’ technology using the Cubic platform for fare collection on its public transport system. In addition, three inner-city BRT stations on the South East Busway spine operate as pre-paid platforms during the evening peak. This paper evaluates the effects of these multiple policy measures on the operation of the study busway station. The comparison between pre- and post-policy scenarios suggests that although boarding time per passenger has decreased, alighting time per passenger has increased slightly. Moreover, a substantial reduction in operating efficiency was observed at the station.
Abstract:
This paper proposes a novel relative entropy rate (RER) based approach for multiple HMM (MHMM) approximation of a class of discrete-time uncertain processes. Under different uncertainty assumptions, the model design problem is posed either as a min-max optimisation problem or as a stochastic minimisation problem on the RER between joint laws describing the state and output processes (rather than the more usual RER between output processes). A suitable filter is proposed, and performance results are established that bound conditional-mean estimation performance and show that estimation performance improves as the RER is reduced. These filter consistency and convergence bounds are the first results characterising multiple HMM approximation performance and suggest that joint RER concepts provide a useful model selection criterion. The proposed model design process and MHMM filter are demonstrated on an important image processing dim-target detection problem.
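For reference, the relative entropy rate between two process laws can be sketched as follows. This is the standard definition; the paper's variant applies it to the joint state-output laws rather than the output laws alone, and the notation here is an assumption for illustration:

```latex
% Relative entropy rate between process laws P and Q, where P_n and Q_n
% denote their restrictions to the first n samples and D is the
% Kullback-Leibler divergence:
R(P \,\|\, Q) \;=\; \lim_{n \to \infty} \frac{1}{n}\, D\!\left(P_n \,\|\, Q_n\right),
\qquad
D\!\left(P_n \,\|\, Q_n\right) \;=\; \mathbb{E}_{P_n}\!\left[\log \frac{dP_n}{dQ_n}\right].
```

Minimising this quantity over a family of candidate MHMM laws is what the min-max and stochastic formulations above optimise under their respective uncertainty assumptions.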
Abstract:
Forensic analysis requires the acquisition and management of many different types of evidence, including individual disk drives, RAID sets, network packets, memory images, and extracted files. Often the same evidence is reviewed by several different tools or examiners in different locations. We propose a backwards-compatible redesign of the Advanced Forensic Format, an open, extensible file format for storing and sharing evidence, arbitrary case-related information, and analysis results among different tools. The new specification, termed AFF4, is designed to be simple to implement and is built upon the well-supported ZIP file format specification. Furthermore, the AFF4 implementation has backward compatibility with existing AFF files.
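Because AFF4 is built on the ZIP specification, the general idea of a ZIP-backed evidence container can be sketched with Python's standard `zipfile` module. The layout and function names below are illustrative assumptions, not the actual AFF4 specification:

```python
import json
import zipfile

def write_evidence(container_path, evidence_name, data, metadata):
    """Append an evidence segment and its case metadata to a ZIP container.

    NOTE: this mirrors only the general concept of a ZIP-backed evidence
    format; the member naming scheme here is hypothetical, not AFF4's.
    """
    with zipfile.ZipFile(container_path, "a", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(evidence_name, data)                          # raw bytes
        zf.writestr(evidence_name + ".info.json", json.dumps(metadata))

def read_evidence(container_path, evidence_name):
    """Read back an evidence segment and its metadata from the container."""
    with zipfile.ZipFile(container_path) as zf:
        data = zf.read(evidence_name)
        metadata = json.loads(zf.read(evidence_name + ".info.json"))
    return data, metadata
```

Using ZIP as the substrate means any standard archive tool can at least enumerate and extract the stored segments, which is one practical sense in which such a format is "simple to implement".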
Abstract:
We investigate the behaviour of Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems in indoor populated environments that have line-of-sight (LoS) between transmitter and receiver arrays. The in-house built MIMO-OFDM packet transmission demonstrator, equipped with four transmitters and four receivers, has been utilized to perform channel measurements at 5.2 GHz. Measurements have been performed using 0 to 3 pedestrians with different antenna arrays (2×2, 3×3 and 4×4). The maximum average capacity for the 2×2 deterministic fixed-SNR scenario is 8.5 dB, compared to 16.2 dB for the 4×4 deterministic scenario; thus an increment of 8 dB in average capacity has been measured when the array size increases from 2×2 to 4×4. In addition, a regular variation has been observed for random scenarios compared to the deterministic scenarios. An incremental trend in average channel capacity for both deterministic and random pedestrian movements has been observed with increasing numbers of pedestrians and antennas. In deterministic scenarios, the variations in average channel capacity are more noticeable than for the random scenarios due to a more prolonged and controlled body-shadowing effect. Moreover, due to frequent LoS blocking and fixed transmission power, a slight decrement has been observed in the spread between the maximum and minimum capacity for the random fixed-Tx-power scenario.
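The measured-capacity figures above are typically derived from the standard fixed-SNR MIMO capacity expression (assumed here; the abstract does not state the exact formula used):

```latex
% Instantaneous capacity of an N_t x N_r MIMO channel H with equal power
% allocation across transmit antennas at signal-to-noise ratio \rho:
C \;=\; \log_2 \det\!\left( \mathbf{I}_{N_r} \;+\; \frac{\rho}{N_t}\, \mathbf{H}\mathbf{H}^{H} \right)
\quad \text{bits/s/Hz},
```

where \(\mathbf{H}\) is the measured channel matrix and \(\mathbf{H}^{H}\) its conjugate transpose. Averaging \(C\) over measured channel realisations gives the average capacities compared across the pedestrian and array-size scenarios.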
Abstract:
In recent times, complaining about the Y Generation and its perceived lack of work ethic has become standard dinner-party conversation amongst Baby Boomers. Discussions in the popular press (Salt, 2008) and amongst some social commentators (Levy, Carroll, Francoeur, & Logue, 2003) indicate that the group labelled Gen Y have distinct and different generational characteristics. Whether or not the differences are clearly delineated by age is still open to discussion, but in the introduction to "The Generational Mirage? A pilot study into the perceptions of leadership by Generation X and Y", Levy et al. argue that "the calibre of leadership in competing organisations and the way they value new and existing employees will play a substantial role in attracting or discouraging these workers regardless of generational labels". Kunreuther (2002) suggests that the difference between younger workers and their older counterparts may have more to do with situational phenomena and their position in the life cycle than deeper generational difference. However, this remains an issue for leadership in schools.
Abstract:
The purpose of this study was to examine the impact of pain on functioning across multiple quality of life (QOL) domains among individuals with multiple sclerosis (MS). A total of 219 people were recruited from a regional MS society membership database to serve as the community-based study sample. All participants completed a questionnaire containing items about their demographic and clinical characteristics, validated measures of QOL and MS-related disability, and a question on whether or not they had experienced clinically significant pain in the preceding 2 weeks. Respondents who reported pain then completed an in-person structured pain interview assessing pain characteristics (intensity, quality, location, extent, and duration). Comparisons between participants with and without MS-related pain demonstrated that pain prevalence and intensity were strongly correlated with QOL: physical health, psychological health, level of independence, and global QOL were more likely to be impaired among people with MS when pain was present, and the extent of impairment was associated with the intensity of pain. Moreover, these relationships remained significant even after statistically controlling for multiple demographic and clinical covariates associated with self-reported QOL. These findings suggest that for people with MS, pain is an important source of distress and disability beyond that caused by neurologic impairments.
Abstract:
Benefit finding is a meaning making construct that has been shown to be related to adjustment in people with MS and their carers. This study investigated the dimensions, stability and potency of benefit finding in predicting adjustment over a 12 month interval using a newly developed Benefit Finding in Multiple Sclerosis Scale (BFiMSS). Usable data from 388 persons with MS and 232 carers was obtained from questionnaires completed at Time 1 and 12 months later (Time 2). Factor analysis of the BFiMSS revealed seven psychometrically sound factors: Compassion/Empathy, Spiritual Growth, Mindfulness, Family Relations Growth, Life Style Gains, Personal Growth, New Opportunities. BFiMSS total and factors showed satisfactory internal and retest reliability coefficients, and convergent, criterion and external validity. Results of regression analyses indicated that the Time 1 BFiMSS factors accounted for significant amounts of variance in each of the Time 2 adjustment outcomes (positive states of mind, positive affect, anxiety, depression) after controlling for Time 1 adjustment, and relevant demographic and illness variables. Findings delineate the dimensional structure of benefit finding in MS, the differential links between benefit finding dimensions and adjustment and the temporal unfolding of benefit finding in chronic illness.
Abstract:
In public venues, crowd size is a key indicator of crowd safety and stability. Crowding levels can be detected using holistic image features; however, this requires a large amount of training data to capture the wide variations in crowd distribution. If a crowd counting algorithm is to be deployed across a large number of cameras, such a large and burdensome training requirement is far from ideal. In this paper we propose an approach that uses local features to count the number of people in each foreground blob segment, so that the total crowd estimate is the sum of the group sizes. This results in an approach that is scalable to crowd volumes not seen in the training data and can be trained on a very small data set. As a local approach is used, the proposed algorithm can easily be used to estimate crowd density throughout different regions of the scene and in a multi-camera environment. A unique localised approach to ground-truth annotation that reduces the required training data is also presented, as a localised approach to crowd counting has different training requirements to a holistic one. Testing on a large pedestrian database compares the proposed technique to existing holistic techniques and demonstrates improved accuracy, and superior performance when test conditions are unseen in the training set or a minimal training set is used.
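The core idea, summing per-blob group-size estimates to get a total count, can be sketched in a few lines. The single local feature (blob area) and the linear model are illustrative assumptions; the paper's actual regressors and model are not specified here:

```python
def estimate_crowd(blob_areas, weight, bias):
    """Estimate total crowd size as the sum of per-blob group sizes.

    Each foreground blob is mapped to a (non-negative) group count by a
    simple linear model on its area. This is a hypothetical stand-in for
    the paper's local-feature regression; the summation structure is
    what makes the estimate scalable beyond crowd volumes seen in
    training.
    """
    total = 0.0
    for area in blob_areas:
        group_size = max(0.0, weight * area + bias)  # people in this blob
        total += group_size
    return round(total)
```

Because each blob is counted independently, the same model can be applied per region of the scene, or per camera, and the regional sums combined, which is why the local formulation suits multi-camera density estimation.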
Abstract:
Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at centimetre accuracy level in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is a single-base RTK. In Australia there are several NRTK services operating in different states and over 1000 single-base RTK systems to support precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations, including modernised GPS, Galileo, GLONASS, and Compass, with multiple frequencies have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of various isolated operating networks, single-base RTK systems, and multiple GNSS constellations for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services, including:
• Multiple GNSS constellations and multiple frequencies
• Large-scale, wide-area NRTK services with a network of networks
• Complex computation algorithms and processes
• A greater part of positioning processes shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous users’ requests (reverse RTK)
These four challenges imply two major requirements for NRTK data processing: expandable computing power and scalable data sharing/transferring capability. This research explores new approaches to address these future NRTK challenges and requirements using the Grid Computing facility, in particular for large data-processing burdens and complex computation algorithms.
A Grid Computing based NRTK framework is proposed in this research: a layered framework consisting of 1) a client layer in the form of a Grid portal; 2) a service layer; and 3) an execution layer. The user’s request is passed through these layers and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework is performed in a five-node Grid environment at QUT and on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open-source software is adopted to download real-time RTCM data from multiple reference stations through the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed, and the results have preliminarily demonstrated the concepts and functionality of the new NRTK framework based on Grid Computing, while some aspects of the system's performance are yet to be improved in future work.
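The layered request flow, client request entering through a portal, the service layer scheduling it, and an execution-layer Grid node doing the work, can be sketched as below. The node names and the round-robin scheduling policy are assumptions for illustration, not the thesis's actual scheduler:

```python
from itertools import cycle

class GridNRTKDispatcher:
    """Illustrative sketch of the layered NRTK framework.

    A rover's request (client layer) is handed to this dispatcher
    (service layer), which assigns it to one of several Grid nodes
    (execution layer). A real deployment would schedule on node load
    and data locality rather than simple round-robin.
    """

    def __init__(self, nodes):
        self._nodes = cycle(nodes)  # round-robin over execution-layer nodes

    def submit(self, rover_id, rtcm_frame):
        """Schedule one RTK computation request; returns the job record."""
        node = next(self._nodes)
        return {
            "node": node,              # execution-layer target
            "rover": rover_id,         # requesting client
            "bytes": len(rtcm_frame),  # size of the RTCM payload
        }
```

In the reverse-RTK setting described above, hundreds of rovers submit simultaneously, so the value of the Grid back-end is precisely that the node pool behind `submit` can be expanded without changing the client-facing portal.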
Abstract:
Neurodegenerative disorders are heterogeneous in nature and include a range of ataxias with oculomotor apraxia, which are characterised by a wide variety of neurological and ophthalmological features. This family includes recessive and dominant disorders. A subfamily of autosomal recessive cerebellar ataxias are characterised by defects in the cellular response to DNA damage. These include the well-characterised disorders Ataxia-Telangiectasia (A-T) and Ataxia-Telangiectasia Like Disorder (A-TLD), the recently identified diseases Spinocerebellar Ataxia with Axonal Neuropathy Type 1 (SCAN1) and Ataxia with Oculomotor Apraxia Type 2 (AOA2), as well as the subject of this thesis, Ataxia with Oculomotor Apraxia Type 1 (AOA1). AOA1 is caused by mutations in the APTX gene, which is located at chromosomal locus 9p13. This gene codes for the 342 amino acid protein Aprataxin. Mutations in APTX cause destabilization of Aprataxin; thus AOA1 is a result of Aprataxin deficiency. Aprataxin has three functional domains: an N-terminal Forkhead Associated (FHA) phosphoprotein interaction domain, a central Histidine Triad (HIT) nucleotide hydrolase domain and a C-terminal C2H2 zinc finger. Aprataxin's FHA domain has homology to the FHA domain of the DNA repair protein 5’ polynucleotide kinase 3’ phosphatase (PNKP). PNKP interacts with a range of DNA repair proteins via its FHA domain and plays a critical role in processing damaged DNA termini. The presence of this domain together with a nucleotide hydrolase domain and a DNA binding motif suggested that Aprataxin may be involved in DNA repair and that AOA1 may be caused by a DNA repair deficit. This was substantiated by the interaction of Aprataxin with proteins involved in the repair of both single- and double-strand DNA breaks (X-Ray Cross-Complementing 1 (XRCC1), XRCC4 and Poly(ADP-Ribose) Polymerase-1 (PARP-1)) and the hypersensitivity of AOA1 patient cell lines to single- and double-strand break inducing agents.
At the commencement of this study little was known about the in vitro and in vivo properties of Aprataxin. Initially this study focused on the generation of recombinant Aprataxin proteins to facilitate examination of the in vitro properties of Aprataxin. Using recombinant Aprataxin proteins I found that Aprataxin binds to double-stranded DNA. Consistent with a role for Aprataxin as a DNA repair enzyme, this binding is not sequence specific. I also report that the HIT domain of Aprataxin hydrolyses adenosine derivatives and, interestingly, found that this activity is competitively inhibited by DNA. This provided initial evidence that DNA binds to the HIT domain of Aprataxin. The interaction of DNA with the nucleotide hydrolase domain of Aprataxin provided initial evidence that Aprataxin may be a DNA-processing factor. Following these studies, Aprataxin was found to hydrolyse 5’-adenylated DNA, which can be generated by unscheduled ligation at DNA breaks with non-standard termini. I found that cell extracts from AOA1 patients do not have DNA-adenylate hydrolase activity, indicating that Aprataxin is the only DNA-adenylate hydrolase in mammalian cells. I further characterised this activity by examining the contribution of the zinc finger and FHA domains to DNA-adenylate hydrolysis by the HIT domain. I found that deletion of the zinc finger ablated the activity of the HIT domain against adenylated DNA, indicating that the zinc finger may be required for the formation of a stable enzyme-substrate complex. Deletion of the FHA domain stimulated DNA-adenylate hydrolysis, which indicated that the activity of the HIT domain may be regulated by the FHA domain. Given that the FHA domain is involved in protein-protein interactions, I propose that the activity of Aprataxin's HIT domain may be regulated by proteins that interact with its FHA domain.
We examined this possibility by measuring the DNA-adenylate hydrolase activity of extracts from cells deficient for the Aprataxin-interacting DNA repair proteins XRCC1 and PARP-1. XRCC1 deficiency did not affect Aprataxin activity, but I found that Aprataxin is destabilized in the absence of PARP-1, resulting in a deficiency of DNA-adenylate hydrolase activity in PARP-1 knockout cells. This implies a critical role for PARP-1 in the stabilization of Aprataxin. Conversely, I found that PARP-1 is destabilized in the absence of Aprataxin. PARP-1 is a central player in a number of DNA repair mechanisms, and this implies that not only do AOA1 cells lack Aprataxin, they may also have defects in PARP-1-dependent cellular functions. Based on this, I identified a defect in a PARP-1-dependent DNA repair mechanism in AOA1 cells. Additionally, I identified elevated levels of oxidized DNA in AOA1 cells, which is indicative of a defect in Base Excision Repair (BER). I attribute this to the reduced level of the BER protein Apurinic Endonuclease 1 (APE1) I identified in Aprataxin-deficient cells. This study has identified and characterised multiple DNA repair defects in AOA1 cells, indicating that Aprataxin deficiency has far-reaching cellular consequences. Consistent with the literature, I show that Aprataxin is a nuclear protein with nucleoplasmic and nucleolar distribution. Previous studies have shown that Aprataxin interacts with the nucleolar rRNA processing factor nucleolin and that AOA1 cells appear to have a mild defect in rRNA synthesis. Given the nucleolar localization of Aprataxin, I examined the protein-protein interactions of Aprataxin and found that Aprataxin interacts with a number of rRNA transcription and processing factors. Based on this and the nucleolar localization of Aprataxin, I proposed that Aprataxin may have an alternative role in the nucleolus.
I therefore examined the transcriptional activity of Aprataxin-deficient cells using nucleotide analogue incorporation. I found that AOA1 cells do not display a defect in basal levels of RNA synthesis; however, they display defective transcriptional responses to DNA damage. In summary, this thesis demonstrates that Aprataxin is a DNA repair enzyme responsible for the repair of adenylated DNA termini and that it is required for stabilization of at least two other DNA repair proteins. Thus not only do AOA1 cells have no Aprataxin protein or activity, they have additional deficiencies in Poly(ADP-Ribose) Polymerase-1 and Apurinic Endonuclease 1 dependent DNA repair mechanisms. I additionally demonstrate DNA-damage-inducible transcriptional defects in AOA1 cells, indicating that Aprataxin deficiency confers a broad range of cellular defects and highlighting the complexity of the cellular response to DNA damage and the multiple defects which result from Aprataxin deficiency. My detailed characterization of the cellular consequences of Aprataxin deficiency provides an important contribution to our understanding of interlinking DNA repair processes.
Abstract:
Since the 1980s, industries and researchers have sought to better understand the quality of services due to the rise in their importance (Brogowicz, Delene and Lyth 1990). More recent developments with online services, coupled with growing recognition of service quality (SQ) as a key contributor to national economies and as an increasingly important competitive differentiator, amplify the need to revisit our understanding of SQ and its measurement. Although ‘SQ’ can be broadly defined as “a global overarching judgment or attitude relating to the overall excellence or superiority of a service” (Parasuraman, Berry and Zeithaml 1988), the term has many interpretations. There has been considerable progress on how to measure SQ perceptions, but little consensus has been achieved on what should be measured. There is agreement that SQ is multi-dimensional, but little agreement as to the nature or content of these dimensions (Brady and Cronin 2001). For example, within the banking sector, there exist multiple SQ models, each consisting of varying dimensions. The existence of multiple conceptions and the lack of a unifying theory call the credibility of existing conceptions into question, and raise the question of whether it is possible at some higher level to define SQ broadly such that it spans all service types and industries. This research aims to explore the viability of a universal conception of SQ, primarily through a careful re-visitation of the services and SQ literature. The study analyses the strengths and weaknesses of the highly regarded and widely used global SQ model (SERVQUAL), which reflects a single-level approach to SQ measurement. The SERVQUAL model states that customers evaluate the SQ of each service encounter based on five dimensions: reliability, assurance, tangibles, empathy and responsiveness. SERVQUAL, however, failed to address what needs to be reliable, assured, tangible, empathetic and responsive.
This research also addresses a more recent global SQ model from Brady and Cronin (2001), the B&C (2001) model, that has potential to be the successor of SERVQUAL in that it encompasses other global SQ models and addresses the ‘what’ questions that SERVQUAL did not. The B&C (2001) model conceives SQ as being multi-dimensional and multi-level, a hierarchical approach to SQ measurement that better reflects human perceptions. In line with the initial intention of SERVQUAL, which was developed to be generalizable across industries and service types, this research aims to develop a conceptual understanding of SQ, via literature and reflection, that encompasses the content/nature of factors related to SQ, and addresses the benefits and weaknesses of various SQ measurement approaches (i.e. disconfirmation versus perceptions-only). Such an understanding of SQ seeks to transcend industries and service types, with the intention of extending our knowledge of SQ and assisting practitioners in understanding and evaluating SQ. The candidate’s research has been conducted within, and seeks to contribute to, the ‘IS-Impact’ research track of the IT Professional Services (ITPS) Research Program at QUT. The vision of the track is “to develop the most widely employed model for benchmarking Information Systems in organizations for the joint benefit of research and practice.” The ‘IS-Impact’ research track has developed an Information Systems (IS) success measurement model, the IS-Impact Model (Gable, Sedera and Chan 2008), which seeks to fulfill the track’s vision. Results of this study will help future researchers in the ‘IS-Impact’ research track address questions such as:
• Is SQ an antecedent or consequence of the IS-Impact model, or both?
• Has SQ already been addressed by existing measures of the IS-Impact model?
• Is SQ a separate, new dimension of the IS-Impact model?
• Is SQ an alternative conception of the IS?
Results from the candidate’s research suggest that SQ dimensions can be classified at a higher level which is encompassed by the B&C (2001) model’s three primary dimensions (interaction, physical environment and outcome). The candidate also notes that it might be viable to re-word the ‘physical environment quality’ primary dimension to ‘environment quality’ so as to better encompass both physical and virtual scenarios (e.g. websites). The candidate does not rule out the global feasibility of the B&C (2001) model’s nine sub-dimensions but acknowledges that more work has to be done to better define the sub-dimensions. The candidate observes that the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions are supportive representations of the ‘interaction’, ‘physical environment’ and ‘outcome’ primary dimensions respectively. The latter statement suggests that customers evaluate each primary dimension (or each higher level of SQ classification), namely ‘interaction’, ‘physical environment’ and ‘outcome’, based on the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions respectively. The ability to classify SQ dimensions at a higher level, coupled with support for the measures that make up this higher level, leads the candidate to propose the B&C (2001) model as a unifying theory that acts as a starting point to measuring SQ and the SQ of IS. The candidate also notes, in parallel with the continuing validation and generalization of the IS-Impact model, that there is value in alternatively conceptualizing the IS as a ‘service’ and ultimately triangulating measures of IS SQ with the IS-Impact model. These further efforts are beyond the scope of the candidate’s study. Results from the candidate’s research also suggest that both the disconfirmation and perceptions-only approaches have their merits, and the choice of approach would depend on the objective(s) of the study.
Should the objective be an overall evaluation of SQ, the perceptions-only approach is more appropriate, as it is more straightforward and reduces administrative overheads in the process. However, should the objective be to identify SQ gaps (shortfalls), the (measured) disconfirmation approach is more appropriate, as it has the ability to identify areas that need improvement.
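The computational difference between the two measurement approaches is small but consequential, and can be sketched as follows. The item scores are hypothetical examples, not data from the study:

```python
def disconfirmation_score(perceptions, expectations):
    """Measured-disconfirmation (SERVQUAL-style) score for one dimension:
    the mean of (perception - expectation) across its items. Negative
    values flag a service-quality gap on that dimension."""
    gaps = [p - e for p, e in zip(perceptions, expectations)]
    return sum(gaps) / len(gaps)

def perceptions_only_score(perceptions):
    """Perceptions-only score for one dimension: the mean perception
    rating alone, with no expectation instrument to administer."""
    return sum(perceptions) / len(perceptions)
```

The perceptions-only score needs half the survey items (no expectations battery), which is the administrative saving noted above, while the gap score localises *where* service falls short of expectations, which is why it suits gap-identification objectives.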
Abstract:
When complex projects go wrong they can go horribly wrong, with severe financial consequences. We are undertaking research to develop leading performance indicators for complex projects: metrics that provide early warning of potential difficulties. The assessment of success of complex projects can be made by a range of stakeholders over different time scales, against different levels of project results: the project’s outputs at the end of the project; the project’s outcomes in the months following project completion; and the project’s impact in the years following completion. We aim to identify leading performance indicators, which may include both success criteria and success factors, and which can be measured by the project team during project delivery to forecast success as assessed by key stakeholders in the days, months and years following the project. The hope is that the leading performance indicators will act as alarm bells, showing whether a project is deviating from plan so that early corrective action can be taken. It may be that different combinations of the leading performance indicators will be appropriate depending on the nature of project complexity. In this paper we develop a new model of project success, whereby success is assessed by different stakeholders over different time frames against different levels of project results. We then relate this to measurements that can be taken during project delivery. A methodology is described to evaluate the early parts of this model, and its implications and limitations are discussed. This paper describes work in progress.