Abstract:
Motorcyclists are the most crash-prone road-user group in many Asian countries including Singapore; however, factors influencing motorcycle crashes are still not well understood. This study examines the effects of various roadway characteristics, traffic control measures and environmental factors on motorcycle crashes at different location types including expressways and intersections. Using techniques of categorical data analysis, this study has developed a set of log-linear models to investigate multi-vehicle motorcycle crashes in Singapore. Motorcycle crash risks in different circumstances have been calculated after controlling for the exposure estimated by the induced exposure technique. Results show that night-time influence increases crash risks of motorcycles particularly during merging and diverging manoeuvres on expressways, and turning manoeuvres at intersections. Riders appear to exercise more care while riding on wet road surfaces particularly during night. Many hazardous interactions at intersections tend to be related to the failure of drivers to notice a motorcycle as well as to judge correctly the speed/distance of an oncoming motorcycle. Roadside conflicts due to stopping/waiting vehicles and interactions with opposing traffic on undivided roads have been found to be detrimental to motorcycle safety along arterial, main and local roads away from intersections. Based on the findings of this study, several targeted countermeasures in the form of legislation, rider training, and safety awareness programmes have been recommended.
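The induced exposure technique mentioned above can be illustrated with a toy calculation. In one common reading of the technique, not-at-fault crash involvements of a road-user group proxy for its exposure, so a relative risk for a condition such as night versus day can be formed as a ratio of ratios. The numbers below are purely illustrative, not the study's data.

```python
# Toy sketch of the induced-exposure idea: not-at-fault involvements
# serve as an exposure proxy, so a risk index is at-fault involvements
# per unit of induced exposure. All counts here are hypothetical.

def induced_exposure_risk(at_fault, not_at_fault):
    """Risk index: at-fault involvements per unit of induced exposure."""
    return at_fault / not_at_fault

# Hypothetical counts of motorcycle crash involvements by condition.
night = induced_exposure_risk(at_fault=80, not_at_fault=40)   # 2.0
day = induced_exposure_risk(at_fault=90, not_at_fault=90)     # 1.0

relative_risk = night / day
print(relative_risk)  # 2.0 -> night-time doubles the risk index here
```

A log-linear model over the crash contingency table, as used in the study, would estimate such effects jointly rather than one ratio at a time.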
Abstract:
The intensity pulsations of a cw 1030 nm Yb:Phosphate monolithic waveguide laser with distributed feedback are described. We show that the pulsations could result from the coupling of the two orthogonal polarization modes through the two-photon process of cooperative luminescence. The predictions of the presented theoretical model agree well with the observed behaviour.
Abstract:
Hybrid system representations have been exploited in a number of challenging modelling situations, including situations where the original nonlinear dynamics are too complex (or too imprecisely known) to be directly filtered. Unfortunately, the question of how best to design suitable hybrid system models has not yet been fully addressed, particularly in situations involving model uncertainty. This paper proposes a novel joint state-measurement relative entropy rate based approach for the design of hybrid system filters in the presence of (parameterised) model uncertainty. We also present a design approach suitable for suboptimal hybrid system filters. The benefits of our proposed approaches are illustrated through design examples and simulation studies.
Abstract:
Clearly identifying the boundary between positive and negative document streams is a major challenge. Several attempts have used negative feedback to address it; however, there are two issues in using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern mining based approach to select some offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update extracted features. An iterative learning algorithm is also proposed to implement this approach on RCV1, and substantial experiments show that the proposed approach achieves encouraging performance.
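The three-way term classification described above can be sketched in a few lines. This is our simplified reading, not necessarily the paper's exact criterion: a term occurring only in positive documents is "positive specific", one occurring only in the selected negative samples (offenders) is "negative specific", and one occurring in both is "general".

```python
# Toy sketch of three-way term classification under a simplifying
# assumption (ours): membership is decided purely by which document
# sets a term occurs in. Each document is modelled as a set of terms.

def classify_terms(positive_docs, offender_docs):
    pos_terms = {t for doc in positive_docs for t in doc}
    neg_terms = {t for doc in offender_docs for t in doc}
    labels = {}
    for term in pos_terms | neg_terms:
        if term in pos_terms and term in neg_terms:
            labels[term] = "general"
        elif term in pos_terms:
            labels[term] = "positive specific"
        else:
            labels[term] = "negative specific"
    return labels

# Hypothetical mini-collection.
positive = [{"economy", "market", "growth"}, {"market", "trade"}]
offenders = [{"market", "celebrity"}, {"celebrity", "gossip"}]

labels = classify_terms(positive, offenders)
print(labels["growth"])     # positive specific
print(labels["market"])     # general
print(labels["celebrity"])  # negative specific
```

In the paper's setting, each category would then receive its own revising strategy when the extracted feature weights are updated.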
Abstract:
A major challenge in modern photonics and nano-optics is the diffraction limit of light, which does not allow field localisation into regions with dimensions smaller than half the wavelength. Localisation of light into nanoscale regions (beyond its diffraction limit) has applications ranging from the design of optical sensors and measurement techniques with resolutions as high as a few nanometres, to the effective delivery of optical energy into targeted nanoscale regions such as quantum dots, nano-electronic and nano-optical devices. This field has become a major research direction over the last decade. The use of strongly localised surface plasmons in metallic nanostructures is one of the most promising approaches to overcome this problem. Therefore, the aim of this thesis is to investigate the linear and non-linear propagation of surface plasmons in metallic nanostructures. This thesis will focus on two main areas of plasmonic research: plasmon nanofocusing and plasmon nanoguiding. Plasmon nanofocusing: the main aim of plasmon nanofocusing research is to focus plasmon energy into nanoscale regions using metallic nanostructures and at the same time achieve strong local field enhancement. Various structures for nanofocusing purposes have been proposed and analysed, such as sharp metal wedges, tapered metal films on dielectric substrates, tapered metal rods, and dielectric V-grooves in metals. However, a number of important practical issues related to nanofocusing in these structures still remain unclear. Therefore, one of the main aims of this thesis is to address two of the most important of these issues: the coupling efficiency and the heating effects of surface plasmons in metallic nanostructures. The method of analysis developed throughout this thesis is a general treatment that can be applied to a diversity of nanofocusing structures, with results shown here for the specific case of sharp metal wedges.
Based on the geometrical optics approximation, it is demonstrated that the coupling efficiency from plasmons generated with a metal grating into the nanofocused symmetric or quasi-symmetric modes may vary between ~50% and ~100% depending on the structural parameters. Optimal conditions for nanofocusing with a view to minimising coupling and dissipative losses are also determined and discussed. It is shown that the temperature near the tip of a metal wedge heated by nanosecond plasmonic pulses can increase by several hundred degrees Celsius. This temperature increase is expected to lead to nonlinear effects, self-influence of the focused plasmon, and ultimately self-destruction of the metal tip. This thesis also investigates a different type of nanofocusing structure which consists of a tapered high-index dielectric layer resting on a metal surface. It is shown that the nanofocusing mechanism that occurs in this structure is somewhat different from other structures that have been considered thus far. For example, the surface plasmon experiences significant back-reflection and mode transformation at a cut-off thickness. In addition, the reflected plasmon shows negative refraction properties that have not been observed in other nanofocusing structures considered to date. Plasmon nanoguiding: guiding surface plasmons using metallic nanostructures is important for the development of highly integrated optical components and circuits, which are expected to have a superior performance compared to their electronic-based counterparts. A number of different plasmonic waveguides have been considered over the last decade, including the recently considered gap and trench plasmon waveguides. The gap and trench plasmon waveguides have proven to be difficult to fabricate.
Therefore, this thesis will propose and analyse four different modified gap and trench plasmon waveguides that are expected to be easier to fabricate, while offering improved propagation characteristics of the guided mode. In particular, it is demonstrated that the guided modes are significantly screened by the extended metal at the bottom of the structure. This is important for the design of highly integrated optics as it provides the opportunity to place two waveguides close together without significant cross-talk. This thesis also investigates the use of plasmonic nanowires to construct a Fabry-Pérot resonator/interferometer. It is shown that the resonance effect can be achieved with the appropriate resonator length and gap width. Typical quality factors of the Fabry-Pérot cavity are determined and explained in terms of radiative and dissipative losses. The possibility of using a nanowire resonator for the design of plasmonic filters with close to ~100% transmission is also demonstrated. It is expected that the results obtained in this thesis will play a vital role in the development of high resolution near-field microscopy and spectroscopy, new measurement techniques and devices for single molecule detection, highly integrated optical devices, and nanobiotechnology devices for diagnostics of living cells.
Abstract:
Despite an ostensibly technology-driven society, the ability to communicate orally continues to feature as an essential ability for students at school and university, as it is for graduates in the workplace. Pedagogically, one rationale is that the need to develop effective oral communication skills is tied to life-long learning, which includes successful participation in future work-related tasks. One tangible way that educators have assessed proficiency in the area of communication is through prepared oral presentations. While much of the literature uses the terms 'oral communication' and 'oral presentation' interchangeably, some writers question the role more formal presentations play in the overall development of oral communication skills. However, such formal speaking tasks continue to be a recognised assessment practice in both the secondary school and the academy, and are, therefore, worthy of further investigation. Adding to the discussion, this thesis explores the knowledge and skills students bring into the academy from previous educational experiences. It examines some of the teaching and assessment methods used in secondary schools to develop oral communication skills through the use of formal oral presentations. Specifically, it investigates criterion-referenced assessment sheets and how these tools are used as a form of instruction, as well as their role and effectiveness in the evaluation of student ability. The focus is on the student's perspective and includes 12 semi-structured interviews with school students. The purpose of this thesis is to explore key thematics underpinning oral communication and to identify tensions between expectations and practice. While acknowledging the breadth and depth of material available under the heading of 'communication theory', this study specifically draws on an expanded view of the rhetorical tradition to fully interrogate the assumptions supporting the practice of assessing oral presentations.
Finally, this thesis recommends reconnecting with an updated understanding of rhetoric as a way of assisting in the development of expressive, articulate and discerning communicators.
Abstract:
Quality-oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers' expectations by supplying reliable, good quality products and services is the key factor for an organization and even a government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and are now being applied widely to improve the quality of products and services in industrial and business sectors. Recently, SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the characteristics and needs of the health sector. It seems that some of the work in the healthcare area has evolved independently of the development of industrial statistical process control methods. Therefore, analysing and comparing the paradigms and characteristics of quality control charts and techniques across the different sectors presents opportunities for transferring knowledge and future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. Therefore, this research investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two aspects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in available datasets and data flow; then a data capturing algorithm using Bayesian decision making methods is developed to determine an economical sample size for statistical analyses within the quality improvement cycle.
Having ensured clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from an industrial context are adapted to monitor radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, adoption of a Bayesian approach is proposed as a new framework for estimating the change point following a control chart's signal. This estimate aims to facilitate root cause analysis within the quality improvement cycle, since it narrows the search for the potential causes of detected changes to a tighter time-frame prior to the signal. This approach enables us to obtain highly informative estimates for change point parameters, since probability-distribution-based results are obtained. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated. The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of shifts, compared against a priori known causes, detected by control charts in monitoring the rate of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The development of the Bayesian change point estimators is then extended to healthcare surveillance of processes in which pre-intervention characteristics of patients affect the outcomes.
In this setting, the Bayesian estimator is first extended to capture patient mix (covariates) through the risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component, and the results obtained, highly recommend the developed Bayesian estimators as a strong alternative for change point estimation within the quality improvement cycle in healthcare surveillance, as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when the probability quantification, flexibility and generalizability of the developed models are also considered. The advantages of the Bayesian approach seen here may also extend to the industrial and business domains where quality monitoring was initially developed.
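The change point estimation described above can be illustrated with a minimal sketch. This is our simplification, not the thesis's full hierarchical MCMC model: for a single step change in a Poisson process with the pre- and post-change rates assumed known and a uniform prior on the change time tau, the posterior over tau is available in closed form.

```python
# Minimal sketch of Bayesian change point estimation for a step change
# in a Poisson process, assuming known rates and a uniform prior on tau
# (a deliberate simplification of the hierarchical-model approach).
import math

def changepoint_posterior(counts, rate_before, rate_after):
    """Posterior P(tau | counts); tau = index of the last pre-change obs.

    The log y! term is constant in tau and therefore dropped.
    """
    n = len(counts)
    log_post = []
    for tau in range(1, n):  # the change must split the series
        ll = sum(y * math.log(rate_before) - rate_before for y in counts[:tau])
        ll += sum(y * math.log(rate_after) - rate_after for y in counts[tau:])
        log_post.append(ll)
    m = max(log_post)  # normalise stably in log space
    w = [math.exp(l - m) for l in log_post]
    z = sum(w)
    return {tau: wi / z for tau, wi in zip(range(1, n), w)}

# Synthetic series: rate 2 for the first 60 points, rate 6 afterwards.
counts = [2] * 60 + [6] * 40
post = changepoint_posterior(counts, rate_before=2.0, rate_after=6.0)
tau_map = max(post, key=post.get)
print(tau_map)  # 60 -> the posterior mode recovers the true change time
```

The thesis goes much further: unknown rates, linear trends, multiple changes, and risk-adjusted outcomes all require hierarchical priors and Markov chain Monte Carlo rather than this closed-form grid.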
Abstract:
The use of mobile devices such as smart phones and tablets in classrooms has been met with mixed sentiments. Some instructors and teachers see them as a distraction and regularly ban their usage. Others, who see their potential to enhance learning, have started to explore ways to integrate them into their teaching in an attempt to improve student engagement. In this paper we report on a pilot study that forms part of a university-wide project reconceptualising its approach to the student evaluation of learning and teaching. In a progressive decision to embrace mobile technology, the university decided to trial a smart phone app designed for students to check in to class and leave feedback on the spot. Our preliminary findings from trialling the app indicate that the application establishes a more immediate feedback loop between students and teachers. However, the app's impact depends on how feedback is shared with students and how the teaching team responds.
Abstract:
In recent times, higher education institutions have paid increasing attention to the views of students to obtain feedback on their experience of learning and teaching through internal surveys. This article reviews research in the field and reports on practices in other Australian universities. Findings demonstrate that while student feedback is valued and used by all Australian universities, survey practices are idiosyncratic and in the majority of cases, questionnaires lack validity and reliability; data are used inadequately or inappropriately; and they offer limited potential for cross-sector benchmarking. The study confirms the need for institutions to develop an overarching framework for evaluation in which a valid, reliable, multidimensional and useful student feedback survey constitutes just one part. Given external expectations and internal requirements to collect feedback from students on their experience of learning and teaching, the pursuit of sound evaluation practices will continue to be of interest at local, national and international levels.
Abstract:
Trivium is a keystream generator for a binary additive synchronous stream cipher. It was selected in the final portfolio for the Profile 2 category of the eSTREAM project. The keystream generator is constructed using bit-based shift registers. In this paper we present an alternative representation of Trivium using word-based shift registers, with a word size of three bits. This representation is useful for determining cycles of internal state values. Under this representation it is clear that the state space can be partitioned into subspaces and that over some of these subspaces the state update function is effectively linear. The role of the initialization process is critical in ensuring the states used for generating keystream are updated nonlinearly at some point, as the state update function alone does not provide this.
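For reference, the standard bit-based description of Trivium (following the public eSTREAM specification, not the paper's word-based representation) can be sketched compactly. The 288-bit state splits into three registers of 93, 84 and 111 bits; the word-based view discussed above groups these bits into 3-bit words. The key/IV loading and the 4 x 288-round initialization are included, since, as the paper notes, initialization is what ensures the keystream-generating states have been updated nonlinearly.

```python
# Sketch of bit-based Trivium keystream generation (eSTREAM spec layout).
# State s1..s288 is held 0-indexed in a Python list.

def trivium(key_bits, iv_bits, n_bits):
    """key_bits, iv_bits: lists of 80 bits; returns n_bits keystream bits."""
    # Load: key into s1..s80, IV into s94..s173, s286..s288 set to 1.
    s = key_bits + [0] * 13 + iv_bits + [0] * 4 + [0] * 108 + [1, 1, 1]
    assert len(s) == 288
    out = []
    for i in range(4 * 288 + n_bits):
        t1 = s[65] ^ s[92]          # s66 + s93
        t2 = s[161] ^ s[176]        # s162 + s177
        t3 = s[242] ^ s[287]        # s243 + s288
        if i >= 4 * 288:            # no output during initialization
            out.append(t1 ^ t2 ^ t3)
        t1 ^= (s[90] & s[91]) ^ s[170]   # nonlinear (AND) feedback taps
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # Shift the three registers (93 / 84 / 111 bits) by one.
        s = [t3] + s[:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return out

# Example: 16 keystream bits from an (arbitrary) all-zero key and IV.
ks = trivium([0] * 80, [0] * 80, 16)
print(ks)
```

Note that the only nonlinearity is the three AND gates in the feedback; with those taps removed the update is linear, which is exactly why the paper's partition of the state space exposes subspaces over which the update is effectively linear.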
Abstract:
Microenterprise development programs (MEPs) have been recognised as a valuable way to help the poor engage in micro-businesses (Green et al., 2006; Vargas, 2000), presenting a way out of poverty (Choudhury et al., 2008; Strier, 2010). Concerns have been raised, however, that the benefits of MEPs often do not reach the extremely poor (Jones et al., 2004; Midgley, 2008; Mosley and Hulme, 1998; Nawaz, 2010; Pritchett, 2006). Balancing the reach of these programs with depth is a challenging task. Targeting as many poor people as possible often results in MEPs focusing on the upper or middle poor, overlooking the most challenging group. As such, MEPs have been criticised for mission drift: losing sight of the organisation's core purpose by assisting those more likely to succeed.
Abstract:
In this paper, we present the outcomes of a project on the exploration of the use of Field Programmable Gate Arrays (FPGAs) as co-processors for scientific computation. We designed a custom circuit for the pipelined solving of multiple tri-diagonal linear systems. The design is well suited for applications that require many independent tri-diagonal system solves, such as finite difference methods for solving PDEs or applications utilising cubic spline interpolation. The selected solver algorithm was the Tri-Diagonal Matrix Algorithm (TDMA or Thomas Algorithm). Our solver supports user-specified precision through the use of a custom floating-point VHDL library supporting addition, subtraction, multiplication and division. The variable precision TDMA solver was tested for correctness in simulation mode. The TDMA pipeline was tested successfully in hardware using a simplified solver model. The details of implementation, the limitations, and future work are also discussed.
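The Thomas Algorithm the hardware solver implements can be sketched in plain Python for clarity. This is the textbook O(n) forward-elimination / back-substitution form, not the pipelined FPGA circuit itself; a, b and c are the sub-, main and super-diagonals and d is the right-hand side.

```python
# Reference sketch of the Thomas Algorithm (TDMA) for tridiagonal systems.
# Assumes the system is well-conditioned (no pivoting is performed).

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system Ax = d; a[0] and c[-1] are ignored."""
    n = len(b)
    bp, dp = list(b), list(d)  # work on copies
    for i in range(1, n):                  # forward elimination
        w = a[i] / bp[i - 1]
        bp[i] -= w * c[i - 1]
        dp[i] -= w * dp[i - 1]
    x = [0.0] * n
    x[n - 1] = dp[n - 1] / bp[n - 1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = (dp[i] - c[i] * x[i + 1]) / bp[i]
    return x

# 3x3 example: [[2,1,0],[1,2,1],[0,1,2]] x = [3,4,3] has solution [1,1,1].
x = thomas_solve([0, 1, 1], [2, 2, 2], [1, 1, 0], [3, 4, 3])
print(x)  # [1.0, 1.0, 1.0]
```

Each solve depends only on its own diagonals, which is why many independent systems pipeline so naturally through a single hardware datapath.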
Abstract:
If there is one thing performance studies graduates should be good at, it is improvising: play and improvisation are central to the contemporary and cultural performance practices we teach and the methods by which we teach them. Objective, offer, acceptance, advancing, reversing, character, status, manipulation, impression management, relationship management: whether we know them from Keith Johnstone's theatre theories or Erving Goffman's theatre theories, the processes by which we play out a story, scenario or social situation to our own benefit are familiar. We understand that identity, action, interaction and its personal, aesthetic, professional or political outcomes are unpredictable, and that we need to adapt to changeable and uncertain circumstances to achieve our aims. Intriguingly, though, in a Higher Education environment that increasingly emphasises employability, skills in play, improvisation and self-performance are never cited as critical graduate attributes. Is the ability to play, improvise and produce spontaneous new self-performances learned in the academy worth articulating into an ability to play, improvise and produce spontaneous new self-performances after graduates leave the academy and move into the role of a performing arts professional in industry? A study of the career paths of our performance studies graduates over the past decade suggests that addressing the challenges they face in moving between academic culture, professional culture, industry and career in terms of improvisation and play principles may be very productive. In articles on performing arts careers, graduates are typically advised to find a market for their work, and develop career self-management, management and marketing skills, together with an ability to find, make and maintain relationships and opportunities for themselves.
Transitioning to career is cast as a challenging process, requiring these skills, because performing arts careers do not offer the security, status and stability of other careers. Our data confirms this. In our study, though, we found that strategies commonly used to build the resilience, self-reliance and persistence graduates require – talking about portfolio careers, parallel careers, and portable, transferable or translatable skills, for example – can engender panic as easily as they engender confidence. In this paper, I consider what happens when we re-articulate some of the skills scholars and industry stakeholders argue are critical in allowing graduates to shift successfully from academy to industry in terms of skills like improvisation, play and self-performance that are already familiar, meaningful and much-practiced amongst performance studies graduates.