898 results for QUT Speaker Identity Verification System
Abstract:
The three-component reaction-diffusion system introduced in [C. P. Schenk et al., Phys. Rev. Lett., 78 (1997), pp. 3781–3784] has become a paradigm model in pattern formation. It exhibits a rich variety of dynamics of fronts, pulses, and spots. The front and pulse interactions range in type from weak, in which the localized structures interact only through their exponentially small tails, to strong interactions, in which they annihilate or collide and in which all components are far from equilibrium in the domains between the localized structures. Intermediate to these two extremes sits the semistrong interaction regime, in which the activator component of the front is near equilibrium in the intervals between adjacent fronts but both inhibitor components are far from equilibrium there, and hence their concentration profiles drive the front evolution. In this paper, we focus on dynamically evolving N-front solutions in the semistrong regime. The primary result is the use of a renormalization group method to rigorously derive the system of N coupled ODEs that governs the positions of the fronts. The operators associated with the linearization about the N-front solutions have N small eigenvalues, and the N-front solutions may be decomposed into a component in the space spanned by the associated eigenfunctions and a component projected onto the complement of this space. This decomposition is carried out iteratively at a sequence of times. The former projections yield the ODEs for the front positions, while the latter projections are associated with remainders that we show stay small in a suitable norm during each iteration of the renormalization group method. Our results also help extend the application of the renormalization group method from the weak interaction regime, for which it was initially developed, to the semistrong interaction regime. The second set of results that we present is a detailed analysis of this system of ODEs, providing a classification of the possible front interactions in the cases of $N=1,2,3,4$, as well as how front solutions interact with the stationary pulse solutions studied earlier in [A. Doelman, P. van Heijster, and T. J. Kaper, J. Dynam. Differential Equations, 21 (2009), pp. 73–115; P. van Heijster, A. Doelman, and T. J. Kaper, Phys. D, 237 (2008), pp. 3335–3368]. Moreover, we present some results on the general case of N-front interactions.
Abstract:
In this article, we analyze the three-component reaction-diffusion system originally developed by Schenk et al. (PRL 78:3781–3784, 1997). The system consists of bistable activator-inhibitor equations with an additional inhibitor that diffuses more rapidly than the standard inhibitor (or recovery variable). It has been used by several authors as a prototype three-component system that generates rich pulse dynamics and interactions, and this richness is the main motivation for the analysis we present. We demonstrate the existence of stationary one-pulse and two-pulse solutions, and travelling one-pulse solutions, on the real line, and we determine the parameter regimes in which they exist. Also, for one-pulse solutions, we analyze various bifurcations, including the saddle-node bifurcation in which they are created, as well as the bifurcation from a stationary to a travelling pulse, which we show can be either subcritical or supercritical. For two-pulse solutions, we show that the third component is essential, since the reduced bistable two-component system does not support them. We also analyze the saddle-node bifurcation in which two-pulse solutions are created. The analytical method used to construct all of these pulse solutions is geometric singular perturbation theory, which allows us to show that these solutions lie in the transverse intersections of invariant manifolds in the phase space of the associated six-dimensional travelling wave system. Finally, as we illustrate with numerical simulations, these solutions form the backbone of the rich pulse dynamics this system exhibits, including pulse replication, pulse annihilation, breathing pulses, and pulse scattering, among others.
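To make the structure of such a model concrete, the following is a minimal Python sketch of an explicit finite-difference simulation of a schematic bistable activator coupled to two linear inhibitors, one of which diffuses faster. The scalings, parameter values, initial condition and grid are illustrative assumptions only, not the exact system or parameters analyzed in the article.

import numpy as np

# Schematic three-component model: bistable activator u with two linear
# inhibitors v, w; w diffuses faster. All values below are assumptions.
eps, alpha, beta, gamma = 0.1, 3.0, 1.0, 0.5   # assumed coupling parameters
tau, theta, D2 = 1.0, 1.0, 5.0                  # assumed time scales / diffusion ratio

L, nx = 100.0, 500
dt, nsteps = 0.001, 20000
x = np.linspace(-L / 2, L / 2, nx)
dx = x[1] - x[0]

def laplacian(f):
    """Second difference with no-flux (Neumann) boundaries."""
    lap = np.empty_like(f)
    lap[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]
    return lap

# front-like initial condition; inhibitors start on the same profile
u = np.tanh(x)
v = u.copy()
w = u.copy()

for _ in range(nsteps):
    u = u + dt * (laplacian(u) + u - u**3 - eps * (alpha * v + beta * w + gamma))
    v = v + (dt / tau) * (laplacian(v) + u - v)
    w = w + (dt / theta) * (D2 * laplacian(w) + u - w)

# u now holds the slowly evolving front/pulse profile at t = nsteps * dt

Time-stepping front- or pulse-like initial data in this way is how the splitting, annihilation, breathing and scattering phenomena mentioned above are typically observed numerically.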
Abstract:
In this article, we analyze the stability and the associated bifurcations of several types of pulse solutions in a singularly perturbed three-component reaction-diffusion equation that has its origin as a model for gas discharge dynamics. Due to the richness and complexity of the dynamics generated by this model, it has in recent years become a paradigm model for the study of pulse interactions. A mathematical analysis of pulse interactions is based on detailed information on the existence and stability of isolated pulse solutions. The existence of these isolated pulse solutions was established in previous work. Here, the pulse solutions are studied via an Evans function associated with the linearized stability problem. Evans functions for stability problems in singularly perturbed reaction-diffusion models can be decomposed into a fast and a slow component, and their zeroes can be determined explicitly by the NLEP method. In the context of the present model, we have extended the NLEP method so that it can be applied to multi-pulse and multi-front solutions of singularly perturbed reaction-diffusion equations with more than one slow component. The bulk of this article is devoted to the analysis of the stability characteristics and the bifurcations of the pulse solutions. Our methods enable us to obtain explicit, analytical information on the various types of bifurcations, such as saddle-node bifurcations, Hopf bifurcations in which breathing pulse solutions are created, and bifurcations into travelling pulse solutions, which can be either subcritical or supercritical.
Abstract:
We investigate regions of bistability between different travelling and stationary structures in a planar singularly perturbed three-component reaction-diffusion system that arises in the context of gas discharge systems. In previous work, we delineated the existence and stability regions of stationary localized spots in this system. Here, we complement this analysis by establishing the stability regions of planar travelling fronts and stationary stripes. Taken together, these results imply that stable fronts and spots can coexist in three-component systems. Numerical simulations indicate that the stable fronts never move towards stable spots but instead move away from them.
Abstract:
New materials technology has provided the potential for the development of an innovative Hybrid Composite Floor Plate System (HCFPS) with many desirable properties: it is lightweight, easy to construct, economical, demountable, recyclable and reusable. Component materials of HCFPS include a central Polyurethane (PU) core, outer layers of Glass-fibre Reinforced Cement (GRC) and steel laminates at tensile regions. HCFPS is configured such that the positive inherent properties of the individual component materials are combined to offset their weaknesses and achieve optimum performance. Research has been carried out using extensive Finite Element (FE) computer simulations supported by experimental testing. Both the strength and serviceability requirements have been established for this lightweight floor plate system. This paper presents some of the research towards the development of HCFPS along with a parametric study to select suitable span lengths.
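As a rough illustration of the kind of serviceability screening that sits behind such a parametric span study, the Python sketch below checks midspan deflection of a simply supported one-way strip using a transformed-section flexural rigidity. All layer properties, loads and limits are hypothetical placeholders, and shear deformation of the PU core is ignored; the actual HCFPS design values come from the FE simulations and testing described above.

import numpy as np

b = 1.0  # strip width (m)

# hypothetical layers: (label, Young's modulus Pa, thickness m), bottom to top
layers = [
    ("steel laminate", 200e9, 0.001),
    ("GRC skin",        20e9, 0.010),
    ("PU core",       0.05e9, 0.080),
    ("GRC skin",        20e9, 0.010),
]

# transformed-section flexural rigidity EI about the elastic neutral axis
z, centroids, EAs, EIs_own = 0.0, [], [], []
for _, E, t in layers:
    centroids.append(z + t / 2)
    EAs.append(E * b * t)
    EIs_own.append(E * b * t**3 / 12)
    z += t
zbar = sum(EA * c for EA, c in zip(EAs, centroids)) / sum(EAs)
EI = sum(EIo + EA * (c - zbar) ** 2 for EIo, EA, c in zip(EIs_own, EAs, centroids))

w = 3.0e3  # assumed service load per metre of strip (N/m)
for span in np.arange(3.0, 8.5, 0.5):            # candidate spans (m)
    delta = 5 * w * span**4 / (384 * EI)         # midspan deflection (m)
    ok = delta <= span / 360                     # common deflection limit
    print(f"span {span:.1f} m: deflection {delta*1000:6.1f} mm  {'OK' if ok else 'exceeds span/360'}")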
Abstract:
Urban transit system performance may be quantified and assessed using transit capacity and productive capacity for planning, design and operational management. Bunker (4) defines important productive performance measures of an individual transit service and transit line. Transit work (p-km) captures the transit task performed over distance. Transit productiveness (p-km/h) captures transit work performed over time. This paper applies productive performance with risk assessment to quantify transit system reliability. Theory is developed to monetize transit segment reliability risk on the basis of demonstration Annual Reliability Event rates by transit facility type, segment productiveness, and unit-event severity. A comparative example of peak hour performance of a transit sub-system containing bus-on-street, busway, and rail components in Brisbane, Australia, demonstrates through practical application the importance of valuing reliability. The comparison reveals the highest risk segments to be long, highly productive on-street bus segments, followed by busway (BRT) segments and then rail segments. A transit reliability risk reduction treatment example demonstrates that benefits can be significant and should be incorporated into project evaluation in addition to those of regular travel time savings, reduced emissions and safety improvements. Reliability can be used to identify high risk components of the transit system, draw comparisons between modes in both planning and operations settings, and value improvement scenarios in a project evaluation setting. The methodology can also be applied to inform daily transit system operational management.
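The following is a hedged, back-of-envelope Python sketch of the kind of segment-level reliability risk calculation the abstract describes: annual risk cost as annual event rate times the productive work put at risk per event (productiveness times event duration) times a unit severity cost. All numbers and the exact formulation are illustrative placeholders, not the paper's demonstration values.

# one plausible reading of the monetized reliability risk idea; placeholders only
UNIT_SEVERITY = 0.50  # assumed $ per passenger-km of transit work disrupted

segments = [
    # name,               facility,        productiveness  events/yr  avg event
    #                                      (p-km/h)                   duration (h)
    ("CBD on-street bus", "bus-on-street", 12000.0,         24,        0.75),
    ("SE Busway",         "busway (BRT)",  45000.0,          6,        0.50),
    ("Rail trunk",        "rail",          80000.0,          2,        0.50),
]

for name, facility, productiveness, rate, duration in segments:
    work_at_risk = productiveness * duration          # p-km exposed per event
    annual_cost = rate * work_at_risk * UNIT_SEVERITY  # $ per year
    print(f"{name:18s} ({facility:14s}): ${annual_cost:,.0f} per year")

With these placeholder inputs the long, highly productive on-street bus segment carries the largest annual risk cost, consistent with the ordering reported in the abstract.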
Abstract:
Flood-related scientific and community-based data are rarely systematically collected and analysed in the Philippines. Over the last decades the Pagsangaan River Basin, Leyte, has experienced several flood events. However, documentation describing flood characteristics such as the extent, duration or height of these floods is close to non-existent. To address this issue, computerized flood modelling was used to reproduce past events for which data were available for at least partial calibration and validation. The model was also used to provide scenario-based predictions based on A1B climate change assumptions for the area. The most important input for flood modelling is a Digital Elevation Model (DEM) of the river basin. No accurate topographic maps or Light Detection And Ranging (LIDAR)-generated data are available for the Pagsangaan River. Therefore, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Map (GDEM), Version 1, was chosen as the DEM. Although its horizontal spatial resolution of 30 m is desirable, the dataset contains substantial vertical errors. These were identified, different correction methods were tested, and the resulting DEM was used for flood modelling. The above-mentioned data were combined with cross-sections at various strategic locations of the river network, meteorological records, river water levels, and current velocities to develop the 1D-2D flood model. SOBEK was used as the modelling software to create different rainfall scenarios, including historic flooding events. Due to the lack of scientific data for the verification of the model quality, interviews with local stakeholders served as the gauge to judge the quality of the generated flood maps. According to interviewees, the model reflects reality more accurately than previously available flood maps. The resulting flood maps are now used by the operations centre of a local flood early warning system for warnings and evacuation alerts. Furthermore, these maps can serve as a basis to identify flood hazard areas for spatial land use planning purposes.
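As an illustration of the simplest kind of vertical-error treatment alluded to above, the Python sketch below removes a systematic vertical bias estimated at reference points of known elevation and then median-filters the remaining speckle-like noise. The file names and reference data are hypothetical, and the paper itself tested several correction methods; this is only an illustrative stand-in.

import numpy as np
from scipy.ndimage import median_filter

dem = np.load("aster_gdem_tile.npy")        # hypothetical GDEM tile, elevations in m
ref = np.load("reference_points.npy")       # hypothetical (row, col, elevation) records

rows = ref[:, 0].astype(int)
cols = ref[:, 1].astype(int)
bias = np.mean(dem[rows, cols] - ref[:, 2])  # mean vertical offset at check points

# de-bias the tile, then smooth speckle-like noise with a 3x3 median filter
dem_corrected = median_filter(dem - bias, size=3)
np.save("aster_gdem_corrected.npy", dem_corrected)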
Abstract:
Evaluating the validity of formative variables has presented ongoing challenges for researchers. In this paper we use global criterion measures to compare and critically evaluate two alternative formative measures of System Quality. One model is based on the ISO-9126 software quality standard, and the other is based on a leading information systems research model. We find that despite both models having a strong provenance, many of the items appear to be non-significant in our study. We examine the implications of this by evaluating the quality of the criterion variables we used, and the performance of PLS when evaluating formative models with a large number of items. We find that our respondents had difficulty distinguishing between global criterion variables measuring different aspects of overall System Quality. Also, because formative indicators “compete with one another” in PLS, it may be difficult to develop a set of measures which are all significant for a complex formative construct with a broad scope and a large number of items. Overall, we suggest that there is cautious evidence that both sets of measures are valid and largely equivalent, although questions still remain about the measures, the use of criterion variables, and the use of PLS for this type of model evaluation.
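The "competing indicators" point can be illustrated with a small synthetic example: when correlated formative indicators jointly predict a global criterion, individual weights can be non-significant even though the block as a whole predicts the criterion well. The ordinary least squares sketch below in Python (not PLS, and not the paper's data) is only meant to show that mechanism.

import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 8
base = rng.normal(size=(n, 1))
X = base + 0.4 * rng.normal(size=(n, k))      # correlated formative indicators
y = X.mean(axis=1) + 0.5 * rng.normal(size=n)  # synthetic global criterion measure

# multiple regression of the criterion on all indicators at once
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
resid = y - X1 @ beta
sigma2 = resid @ resid / (n - k - 1)
cov = sigma2 * np.linalg.inv(X1.T @ X1)
t_stats = beta[1:] / np.sqrt(np.diag(cov))[1:]

r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print("indicator t-statistics:", np.round(t_stats, 2))   # mostly weak individually
print("block R^2 against the criterion:", round(r2, 3))  # yet the block predicts well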
Abstract:
Readily accepted knowledge regarding crash causation is consistently omitted from efforts to model and subsequently understand motor vehicle crash occurrence and its contributing factors. For instance, distracted and impaired driving accounts for a significant proportion of crash occurrence, yet is rarely modeled explicitly. In addition, spatially allocated influences such as local law enforcement efforts, proximity to bars and schools, and chronic roadside distractions (advertising, pedestrians, etc.) play a role in contributing to crash occurrence and yet are routinely absent from crash models. By and large, these well-established omitted effects are simply assumed to contribute to model error, with the predominant focus on modeling the engineering and operational effects of transportation facilities (e.g. AADT, number of lanes, speed limits, width of lanes, etc.). The typical analytical approach, with a variety of statistical enhancements, has been to model crashes that occur at system locations as negative binomial (NB) distributed events that arise from a singular, underlying crash generating process. These models and their statistical kin dominate the literature; however, it is argued in this paper that these models fail to capture the underlying complexity of motor vehicle crash causes, and thus thwart deeper insights regarding crash causation and prevention. This paper first describes hypothetical scenarios that collectively illustrate why current models mislead highway safety researchers and engineers. It is argued that current model shortcomings are significant and will lead to poor decision-making. Exploiting our current state of knowledge of crash causation, crash counts are postulated to arise from three processes: observed network features, unobserved spatial effects, and 'apparent' random influences that largely reflect behavioral influences of drivers. It is argued, furthermore, that these three processes can in theory be modeled separately to gain deeper insight into crash causes, and that the resulting model represents a more realistic depiction of reality than the state-of-practice NB regression. An admittedly imperfect empirical model that mixes three independent crash occurrence processes is shown to outperform the classical NB model. The questioning of current modeling assumptions and the implications of the latent mixture model for current practice are the most important contributions of this paper, with an initial but rather vulnerable attempt to model the latent mixtures as a secondary contribution.
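As a stylised illustration of the contrast drawn above, the Python sketch below fits a single negative binomial model and a three-component Poisson mixture to synthetic segment crash counts generated by three latent processes, and compares them by AIC. It is an illustration of the idea only, not the paper's empirical specification, and all rates and weights are made up.

import numpy as np
from scipy import stats
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(1)
# synthetic crash counts from three latent processes (low / medium / high risk)
counts = np.concatenate([
    rng.poisson(0.5, 600),   # e.g. benign geometry, little roadside activity
    rng.poisson(3.0, 300),   # e.g. unobserved spatial effects
    rng.poisson(9.0, 100),   # e.g. behaviour-dominated locations
])

def nb_negloglik(params):
    # single NB process; transform to keep r > 0 and 0 < p < 1
    r, p = np.exp(params[0]), 1.0 / (1.0 + np.exp(-params[1]))
    return -stats.nbinom.logpmf(counts, r, p).sum()

def mix_negloglik(params):
    # three-component Poisson mixture; softmax gives the mixture weights
    mus = np.exp(params[:3])
    logw = params[3:] - logsumexp(params[3:])
    comp = stats.poisson.logpmf(counts[:, None], mus[None, :]) + logw
    return -logsumexp(comp, axis=1).sum()

nb_fit = minimize(nb_negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
mix_fit = minimize(mix_negloglik,
                   x0=[np.log(0.3), np.log(2.0), np.log(8.0), 0.0, 0.0, 0.0],
                   method="Nelder-Mead",
                   options={"maxiter": 20000, "maxfev": 20000})

aic_nb = 2 * 2 + 2 * nb_fit.fun        # 2 NB parameters
aic_mix = 2 * 5 + 2 * mix_fit.fun      # 3 means + 2 free mixture weights
print(f"AIC, single NB: {aic_nb:.1f}   AIC, 3-component mixture: {aic_mix:.1f}")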
Abstract:
Digital Stories are short autobiographical documentaries, often illustrated with personal photographs and narrated in the first person, and typically produced in group workshops. As a media form they offer ‘ordinary people’ the opportunity to represent themselves to audiences of their choosing; and this amplification of hitherto unheard voices has significant repercussions for their social participation. Many of the storytellers involved in the ‘Rainbow Family Tree’ case study that is the subject of this paper can be characterised as ‘everyday’ activists for their common desire to use their personal stories to increase social acceptance of marginalised identity categories. However, in conflict with their willingness to share their personal stories, many fear the risks and ramifications of distributing them in public spaces (especially online) to audiences both intimate and unknown. Additionally, while technologies for production and distribution of rich media products have become more accessible and user-friendly, many obstacles remain. For many people there are difficulties with technological access and aptitude, personal agency, cultural capital, and social isolation, not to mention availability of the time and energy requisite to Digital Storytelling. Additionally, workshop context, facilitation and distribution processes all influence the content of stories. This paper explores the many factors that make ‘authentic’ self-representation far from straightforward. I use qualitative data drawn from interviews, Digital Story texts and ethnographic observation of GLBTQIS participants in a Digital Storytelling initiative that combined face-to-face and online modes of participation. I consider mediating influences in practice and theory and draw on strategies put forth in cultural anthropology and narrative therapy to propose some practical tools for nuanced and sensitive facilitation of Digital Storytelling workshops and webspaces. Finally, I consider the implications of these facilitation strategies for voice, identity and social participation.
Abstract:
The IEEE Subcommittee on the Application of Probability Methods (APM) published the IEEE Reliability Test System (RTS) [1] in 1979. This system provides a consistent and generally acceptable set of data that can be used both in generation capacity and in composite system reliability evaluation [2,3]. The test system provides a basis for the comparison of results obtained by different people using different methods. Prior to its publication, there was no general agreement on either the system or the data that should be used to demonstrate or test the various techniques developed to conduct reliability studies. The development of reliability assessment techniques and programs is very dependent on the intent behind the development, as the experience of one power utility with its system may be quite different from that of another utility. The development and the utilization of a reliability program are, therefore, greatly influenced by the experience of a utility and the intent of the system manager, planner and designer conducting the reliability studies. The IEEE-RTS has proved to be extremely valuable in highlighting and comparing the capabilities (or incapabilities) of programs used in reliability studies, the differences in the perception of various power utilities and the differences in the solution techniques. The IEEE-RTS contains a reasonably large power network, which can be difficult to use for initial studies in an educational environment.
Abstract:
The IEEE Reliability Test System (RTS) developed by the Application of Probability Methods Subcommittee has been used to compare and test a wide range of generating capacity and composite system evaluation techniques and subsequent digital computer programs. A basic reliability test system is presented which has evolved from the reliability education and research programs conducted by the Power System Research Group at the University of Saskatchewan. The basic system data necessary for adequacy evaluation at the generation and composite generation and transmission system levels are presented, together with the fundamental data required to conduct reliability-cost/reliability-worth evaluation.
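For readers unfamiliar with how such test systems are exercised, the Python sketch below performs the most basic generating-capacity adequacy calculation they support: a capacity outage probability table built by convolving two-state units, accumulated into a loss-of-load expectation (LOLE) over a daily peak load series. The unit list and loads are made-up placeholders, not the RTS or the educational test system's actual data.

import numpy as np

# (capacity MW, forced outage rate) for a small hypothetical system
units = [(40, 0.03), (40, 0.03), (30, 0.02), (20, 0.02), (20, 0.02), (10, 0.01)]
total_cap = sum(cap for cap, _ in units)

# capacity outage probability table, indexed by MW of capacity on outage
copt = np.zeros(total_cap + 1)
copt[0] = 1.0
for cap, forced_outage_rate in units:
    shifted = np.zeros_like(copt)
    shifted[cap:] = copt[:-cap]                       # unit on outage shifts the table
    copt = copt * (1 - forced_outage_rate) + shifted * forced_outage_rate

# hypothetical daily peak loads (MW) for one year
rng = np.random.default_rng(0)
daily_peaks = rng.normal(110, 15, size=365).clip(60, 150)

# LOLE (days/year): probability that outage capacity exceeds the reserve, each day
lole = 0.0
for load in daily_peaks:
    reserve = total_cap - load
    lole += copt[int(np.floor(reserve)) + 1:].sum()
print(f"installed capacity {total_cap} MW, LOLE approx. {lole:.3f} days/year")

The same table-and-load-model structure underlies the generation adequacy indices such test systems are designed to benchmark; composite (generation plus transmission) evaluation adds a network model on top of it.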