Abstract:
Plant biosecurity requires statistical tools to interpret field surveillance data in order to manage pest incursions that threaten crop production and trade. Ultimately, management decisions need to be based on the probability that an area is infested or free of a pest. Current informal approaches to delimiting pest extent rely upon expert ecological interpretation of presence / absence data over space and time. Hierarchical Bayesian models provide a cohesive statistical framework that can formally integrate the available information on both pest ecology and data. The overarching method involves constructing an observation model for the surveillance data, conditional on the hidden extent of the pest and uncertain detection sensitivity. The extent of the pest is then modelled as a dynamic invasion process that includes uncertainty in ecological parameters. Modelling approaches to assimilate this information are explored through case studies on spiralling whitefly, Aleurodicus dispersus and red banded mango caterpillar, Deanolis sublimbalis. Markov chain Monte Carlo simulation is used to estimate the probable extent of pests, given the observation and process model conditioned by surveillance data. Statistical methods, based on time-to-event models, are developed to apply hierarchical Bayesian models to early detection programs and to demonstrate area freedom from pests. The value of early detection surveillance programs is demonstrated through an application to interpret surveillance data for exotic plant pests with uncertain spread rates. The model suggests that typical early detection programs provide a moderate reduction in the probability of an area being infested but a dramatic reduction in the expected area of incursions at a given time. Estimates of spiralling whitefly extent are examined at local, district and state-wide scales. 
The local model estimates the rate of natural spread and the influence of host architecture, host suitability and inspector efficiency. These parameter estimates can support the development of robust surveillance programs. Hierarchical Bayesian models for the human-mediated spread of spiralling whitefly are developed for the colonisation of discrete cells connected by a modified gravity model. By estimating dispersal parameters, the model can be used to predict the extent of the pest over time. An extended model predicts the climate restricted distribution of the pest in Queensland. These novel human-mediated movement models are well suited to demonstrating area freedom at coarse spatio-temporal scales. At finer scales, and in the presence of ecological complexity, exploratory models are developed to investigate the capacity for surveillance information to estimate the extent of red banded mango caterpillar. It is apparent that excessive uncertainty about observation and ecological parameters can impose limits on inference at the scales required for effective management of response programs. The thesis contributes novel statistical approaches to estimating the extent of pests and develops applications to assist decision-making across a range of plant biosecurity surveillance activities. Hierarchical Bayesian modelling is demonstrated as both a useful analytical tool for estimating pest extent and a natural investigative paradigm for developing and focussing biosecurity programs.
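The area-freedom reasoning described above, in which belief in pest presence is updated as negative surveillance accumulates under imperfect detection, can be sketched as a simple Bayes update. This is a minimal illustration only, not the thesis's hierarchical model; the prior, detection sensitivity and survey count are invented:

```python
def prob_infested(prior, sensitivity, n_negative):
    """Posterior probability an area is infested after n negative surveys,
    assuming independent surveys with imperfect per-survey detection."""
    miss = (1.0 - sensitivity) ** n_negative  # chance of n misses if truly infested
    return prior * miss / (prior * miss + (1.0 - prior))

# Hypothetical numbers: 20% prior belief of infestation, 70% detection
# sensitivity per survey, five consecutive negative surveys
p = prob_infested(0.20, 0.70, 5)
```

Each additional clean survey shrinks the posterior, which is the quantitative core of demonstrating area freedom; the hierarchical models in the thesis additionally let the hidden extent spread over time and treat the sensitivity itself as uncertain.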
Abstract:
Proposed transmission smart grids will use a digital platform for the automation of substations operating at voltage levels of 110 kV and above. The IEC 61850 series of standards, released in parts over the last ten years, provides a specification for substation communications networks and systems. These standards, along with IEEE Std 1588-2008 Precision Time Protocol version 2 (PTPv2) for precision timing, are recommended by both the IEC Smart Grid Strategy Group and the NIST Framework and Roadmap for Smart Grid Interoperability Standards for substation automation. IEC 61850-8-1 and IEC 61850-9-2 provide an inter-operable solution to support multi-vendor digital process bus solutions, allowing for the removal of potentially lethal voltages and damaging currents from substation control rooms, a reduction in the amount of cabling required in substations, and the adoption of non-conventional instrument transformers (NCITs). IEC 61850, PTPv2 and Ethernet are three complementary protocol families that together define the future of sampled value digital process connections for smart substation automation. This paper describes a specific test and evaluation system that uses real time simulation, protection relays, PTPv2 time clocks and artificial network impairment that is being used to investigate technical impediments to the adoption of sampled value (SV) process bus systems by transmission utilities. Knowing the limits of a digital process bus, especially when sampled values and NCITs are included, will enable utilities to make informed decisions regarding the adoption of this technology.
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration. The models needed to be able to be calibrated using data acquired at these locations. The output of the models needed to be able to be validated with data acquired at these sites. Therefore, the outputs should be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than empiricism, which is the case for the macroscopic models currently used. And the models needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible to produce a stand-alone model which is applicable to all facilities and locations, in this single study, however the scene has been set for the application of the models to a much broader range of operating conditions. 
Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models to a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models. Different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled. Some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations, merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams. On-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed for on-ramps where traffic leaves signalised intersections and unsignalised intersections. Constant departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time, and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995).
The minimum average minor stream delay and corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be less when unsignalised intersections are located upstream of on-ramps than signalised intersections, and less still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration. From these, practical capacities can be estimated. Further calibration is required of traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and provide further insight into the nature of operations.
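The gap-acceptance capacity calculation underlying this analysis can be illustrated with the commonly cited minor-stream capacity formula for Cowan's M3 headway model (in the style of Troutbeck's limited-priority work). The flow, free-headway proportion, critical gap and follow-on values below are hypothetical, chosen only to echo the 1 s minimum headway and 1 to 1.2 s follow-on times reported:

```python
import math

def m3_capacity(q_major, alpha, delta, t_c, t_f):
    """Potential minor-stream (on-ramp) capacity in veh/s when major-stream
    headways follow Cowan's M3 model: proportion alpha of free headways
    beyond a minimum headway delta, critical gap t_c, follow-on time t_f."""
    lam = alpha * q_major / (1.0 - delta * q_major)  # free-headway decay rate
    return (alpha * q_major * math.exp(-lam * (t_c - delta))
            / (1.0 - math.exp(-lam * t_f)))

# Hypothetical inputs: kerb-lane flow 1200 veh/h, 75% free headways,
# 1 s minimum headway, 3 s critical gap, 1.1 s follow-on time
cap = m3_capacity(1200 / 3600.0, 0.75, 1.0, 3.0, 1.1) * 3600.0  # veh/h
```

As expected, capacity falls as the critical gap or the major-stream flow grows; the thesis's net limited priority model modifies the major-stream flow entering such a formula to exclude cooperative lane changers.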
Abstract:
This chapter considers how teachers can make a difference to the kinds of literacy young people take up. Increasingly, researchers and policy-makers see literacy as an ensemble of socio-cultural situated practices rather than as a unitary skill. Accordingly, the differences in what young people come to do with literacy, in and out of school, confront us more directly. If literacy development involves assembling dynamic repertoires of practices, it is crucial to consider what different groups of children growing up and going to school in different places have access to and make investments in over time; the kinds of literate communities from which some are excluded or included; and how educators make a difference to the kinds of literate trajectories and identities young people put together.
Abstract:
It is predicted that with increased life expectancy in the developed world, there will be a greater demand for synthetic materials to repair or regenerate lost, injured or diseased bone (Hench & Thompson 2010). There are still few synthetic materials having true bone inductivity, which limits their application for bone regeneration, especially in large-size bone defects. To solve this problem, growth factors, such as bone morphogenetic proteins (BMPs), have been incorporated into synthetic materials in order to stimulate de novo bone formation in the center of large-size bone defects. The greatest obstacle with this approach is the rapid diffusion of the protein from the carrier material, leading to a precipitous loss of bioactivity; the result is often insufficient local induction or failure of bone regeneration (Wei et al. 2007). It is critical that the protein is loaded into the carrier material under conditions which maintain its bioactivity (van de Manakker et al. 2009). For this reason, the efficient loading and controlled release of a protein from a synthetic material has remained a significant challenge. The use of microspheres as protein/drug carriers has received considerable attention in recent years (Lee et al. 2010; Pareta & Edirisinghe 2006; Wu & Zreiqat 2010). Compared to macroporous block scaffolds, the chief advantage of microspheres is their superior protein-delivery properties and ability to fill bone defects with irregular and complex shapes and sizes. Upon implantation, the microspheres easily conform to the irregular implant site, and the interstices between the particles provide space for both tissue and vascular ingrowth, which are important for effective and functional bone regeneration (Hsu et al. 1999). Alginates are natural polysaccharides and their production does not carry the implicit risk of contamination with allo- or xeno-proteins or viruses (Xie et al. 2010).
Because alginate is generally cytocompatible, it has been used extensively in medicine, including cell therapy and tissue engineering applications (Tampieri et al. 2005; Xie et al. 2010; Xu et al. 2007). Calcium cross-linked alginate hydrogel is considered a promising material as a delivery matrix for drugs and proteins, since its gel microspheres form readily in aqueous solutions at room temperature, eliminating the need for harsh organic solvents and thereby maintaining the bioactivity of proteins in the process of loading into the microspheres (Jay & Saltzman 2009; Kikuchi et al. 1999). In addition, calcium cross-linked alginate hydrogel is degradable under physiological conditions (Kibat et al. 1990; Park et al. 1993), which makes alginate stand out as an attractive candidate material for protein carriers and bone regeneration (Hosoya et al. 2004; Matsuno et al. 2008; Turco et al. 2009). However, the major disadvantages of alginate microspheres are their low loading efficiency and rapid release of proteins due to the mesh-like networks of the gel (Halder et al. 2005). Previous studies have shown that a core-shell structure in drug/protein carriers can overcome the issues of limited loading efficiency and rapid release of drug or protein (Chang et al. 2010; Molvinger et al. 2004; Soppimath et al. 2007). We therefore hypothesized that introducing a core-shell structure into the alginate microspheres could overcome the shortcomings of pure alginate. Calcium silicate (CS) has been tested as a biodegradable biomaterial for bone tissue regeneration. CS is capable of inducing bone-like apatite formation in simulated body fluid (SBF) and its apatite-formation rate in SBF is faster than that of Bioglass® and A-W glass-ceramics (De Aza et al. 2000; Siriphannon et al. 2002). Titanium alloys plasma-spray coated with CS have excellent in vivo bioactivity (Xue et al. 2005) and porous CS scaffolds have enhanced in vivo bone formation ability compared to porous β-tricalcium phosphate ceramics (Xu et al. 2008). In light of the many advantages of this material, we decided to prepare CS/alginate composite microspheres by combining a CS shell with an alginate core to improve their protein delivery and mineralization for potential protein delivery and bone repair applications.
Abstract:
In vector space based approaches to natural language processing, similarity is commonly measured by taking the angle between two vectors representing words or documents in a semantic space. This is natural from a mathematical point of view, as the angle between unit vectors is, up to constant scaling, the only unitarily invariant metric on the unit sphere. However, similarity judgement tasks reveal that human subjects fail to produce data which satisfies the symmetry and triangle inequality requirements for a metric space. A possible conclusion, reached in particular by Tversky et al., is that some of the most basic assumptions of geometric models are unwarranted in the case of psychological similarity, a result which would impose strong limits on the validity and applicability of vector space based (and hence also quantum inspired) approaches to the modelling of cognitive processes. This paper proposes a resolution to this fundamental criticism of the applicability of vector space models of cognition. We argue that pairs of words imply a context which in turn induces a point of view, allowing a subject to estimate semantic similarity. Context is here introduced as a point of view vector (POVV) and the expected similarity is derived as a measure over the POVVs. Different pairs of words will invoke different contexts and different POVVs. Hence the triangle inequality ceases to be a valid constraint on the angles. We test the proposal on a few triples of words and outline further research.
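The contextual effect argued for here can be illustrated with a toy reweighting of dimensions. This is not the paper's POVV measure; the vectors and context weights are invented purely to show that the same word pair can receive different similarities once a context is imposed:

```python
import numpy as np

def cosine(u, v):
    """Standard cosine similarity between two vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def contextual_similarity(u, v, context_weights):
    """Cosine similarity after reweighting dimensions by a context vector:
    a toy stand-in for conditioning similarity on a point of view."""
    w = np.asarray(context_weights, float)
    return cosine(np.asarray(u, float) * w, np.asarray(v, float) * w)

# Invented 3-d 'word' vectors sharing only their third feature
u, v = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]
base = cosine(u, v)                           # context-free similarity
s1 = contextual_similarity(u, v, [1, 1, 0])   # context ignoring the shared feature
s2 = contextual_similarity(u, v, [0, 0, 1])   # context focused on the shared feature
```

Because each pair may be judged under its own context, angle comparisons across different pairs no longer live in a single metric space, which is how the triangle inequality can fail without abandoning vector representations.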
Abstract:
The proposition underpinning this study is that engaging in meaningful dialogue with previous visitors represents an efficient and effective use of resources for a destination marketing organization (DMO), compared to above-the-line advertising in broadcast media. However, there has been a lack of attention in the tourism literature relating to destination switching, loyalty and customer relationship management (CRM) with which to test such a proposition. This paper reports an investigation of visitor relationship marketing (VRM) orientation among DMOs. A model of CRM orientation, which was developed from the wider marketing literature and a prior qualitative study, was used to develop a scale to operationalise DMO visitor relationship orientation. Due to a small sample, the Partial Least Squares (PLS) method of structural equation modelling was used to analyse the data. Although the sample limits the ability to generalise, the results indicated that the DMOs' visitor orientation is generally responsive and reactive rather than proactive.
Abstract:
The Full Federal Court has once again been called upon to explore the limits of s51AA of the Trade Practices Act 1974 (Cth) in the context of a retail tenancy between commercially experienced parties. The decision is Australian Competition and Consumer Commission v Samton Holdings Pty Ltd [2002] FCA 62.
Abstract:
Section 126 of the Land Title Act 1994 (Qld) regulates whether, and if so, when a caveat will lapse. While certain caveats will not lapse due to the operation of s 126(1), if a caveator does not wish a caveat to which the section applies to lapse, the caveator must start a proceeding in a court of competent jurisdiction to establish the interest claimed under the caveat within the time limits specified in, and otherwise comply with the obligations imposed by, s 126(4). The requirement, in s 126(4), to “start a proceeding” was the subject of judicial examination by the Court of Appeal (McMurdo P, Holmes JA and MacKenzie J) in Cousins Securities Pty Ltd v CEC Group Ltd [2007] QCA 192.
Abstract:
Background: In order to provide insights into the complex biochemical processes inside a cell, modelling approaches must find a balance between achieving an adequate representation of the physical phenomena and keeping the associated computational cost within reasonable limits. This issue is particularly acute when spatial inhomogeneities have a significant effect on the system's behaviour. In such cases, a spatially-resolved stochastic method can better portray the biological reality, but the corresponding computer simulations can in turn be prohibitively expensive. Results: We present a method that incorporates spatial information by means of tailored, probability-distributed time-delays. These distributions can be obtained directly from a single in silico experiment or a suitable set of in vitro experiments, and are subsequently fed into a delay stochastic simulation algorithm (DSSA), achieving a good compromise between computational cost and a much more accurate representation of spatial processes such as molecular diffusion and translocation between cell compartments. Additionally, we present a novel alternative approach based on delay differential equations (DDEs) that can be used in scenarios of high molecular concentration and low noise propagation. Conclusions: Our proposed methodologies accurately capture and incorporate certain spatial processes into temporal stochastic and deterministic simulations, increasing their accuracy at low computational cost. This is of particular importance given that the time spans of cellular processes are generally larger (possibly by several orders of magnitude) than those achievable by current spatially-resolved stochastic simulators. Hence, our methodology allows users to explore cellular scenarios under the effects of diffusion and stochasticity over time spans that were, until now, simply unfeasible. Our methodologies are supported by theoretical considerations on the different modelling regimes, i.e. spatial vs.
delay-temporal, as indicated by the corresponding Master Equations and presented elsewhere.
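The delay stochastic simulation idea, in which an explicitly spatial process is replaced by a sampled time-delay before a product appears, can be sketched as follows. This is a minimal single-reaction illustration, not the authors' DSSA; the rate and delay distribution are invented:

```python
import heapq
import random

def delay_ssa(rate, delay_sampler, t_end, seed=0):
    """Minimal delay-SSA sketch: a zeroth-order initiation firing at `rate`,
    whose product only appears after a sampled delay (standing in for a
    diffusion or translocation time). Returns products completed by t_end."""
    rng = random.Random(seed)
    t, completed = 0.0, 0
    pending = []  # min-heap of completion times for delayed products
    while True:
        t_next = t + rng.expovariate(rate)  # time of the next initiation
        # Release delayed products completing before the next event/horizon
        while pending and pending[0] <= min(t_next, t_end):
            heapq.heappop(pending)
            completed += 1
        if t_next > t_end:
            return completed
        t = t_next
        heapq.heappush(pending, t + delay_sampler(rng))

# Invented example: one initiation per second on average, exponentially
# distributed transport delay with a 5 s mean, 100 s horizon
n = delay_ssa(rate=1.0, delay_sampler=lambda r: r.expovariate(1.0 / 5.0),
              t_end=100.0, seed=42)
```

The pending-event heap is the whole trick: the simulator stays purely temporal, while the delay distribution carries the spatial information that a full reaction-diffusion simulation would otherwise have to resolve.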
Abstract:
Workflow nets, a particular class of Petri nets, have become one of the standard ways to model and analyze workflows. Typically, they are used as an abstraction of the workflow that is used to check the so-called soundness property. This property guarantees the absence of livelocks, deadlocks, and other anomalies that can be detected without domain knowledge. Several authors have proposed alternative notions of soundness and have suggested using more expressive languages, e.g., models with cancellations or priorities. This paper provides an overview of the different notions of soundness and investigates these in the presence of different extensions of workflow nets. We will show that the eight soundness notions described in the literature are decidable for workflow nets. However, most extensions will make all of these notions undecidable. These new results show the theoretical limits of workflow verification. Moreover, we discuss some of the analysis approaches described in the literature.
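The classical soundness notion can be made concrete with a brute-force check on a tiny, bounded workflow net. This is an illustrative sketch rather than any of the decision procedures discussed in the paper, and the example nets are invented:

```python
from collections import deque

def fire(marking, pre, post):
    """Fire a transition on a marking (frozenset of (place, tokens) pairs);
    return the successor marking, or None if the transition is not enabled."""
    m = dict(marking)
    for p in pre:
        if m.get(p, 0) < 1:
            return None
        m[p] -= 1
    for p in post:
        m[p] = m.get(p, 0) + 1
    return frozenset((p, n) for p, n in m.items() if n > 0)

def is_sound(transitions, source, sink):
    """Classical-soundness sketch for a small *bounded* workflow net:
    (1) no dead transitions, (2) proper completion (no tokens left once the
    sink is marked), (3) option to complete (the final marking reachable
    from every reachable marking). Exhaustive search over markings, so it
    will not terminate on unbounded nets."""
    start, final = frozenset([(source, 1)]), frozenset([(sink, 1)])
    seen, queue, fired = {start}, deque([start]), set()
    while queue:
        m = queue.popleft()
        for t, (pre, post) in transitions.items():
            m2 = fire(m, pre, post)
            if m2 is None:
                continue
            fired.add(t)
            if m2 not in seen:
                seen.add(m2)
                queue.append(m2)
    if fired != set(transitions):
        return False  # some transition can never fire (dead transition)
    for m in seen:
        if dict(m).get(sink, 0) and m != final:
            return False  # improper completion: sink marked, tokens left over
    def reaches_final(m):
        s, q = {m}, deque([m])
        while q:
            x = q.popleft()
            if x == final:
                return True
            for pre, post in transitions.values():
                y = fire(x, pre, post)
                if y is not None and y not in s:
                    s.add(y)
                    q.append(y)
        return False
    return all(reaches_final(m) for m in seen)

# A sound sequential net i -> t1 -> p -> t2 -> o, and a variant where t1
# leaves a stray token in q, violating proper completion
sound = is_sound({"t1": (("i",), ("p",)), "t2": (("p",), ("o",))}, "i", "o")
unsound = is_sound({"t1": (("i",), ("p", "q")), "t2": (("p",), ("o",))}, "i", "o")
```

Even this toy check hints at why extensions are delicate: the search relies on an explorable state space, and features such as cancellation regions or priorities change the reachability structure on which such arguments rest.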
Abstract:
Interpersonal factors are crucial to a deepened understanding of depression. Belongingness, also referred to as connectedness, has been established as a strong risk/protective factor for depressive symptoms. To elucidate this link it may be beneficial to investigate the relative importance of specific psychosocial contexts as belongingness foci. Here we investigate the construct of workplace belongingness. Employees at a disability services organisation (N = 125) completed measures of depressive symptoms, anxiety symptoms, workplace belongingness and organisational commitment. Psychometric analyses, including Horn's parallel analyses, indicate that workplace belongingness is a unitary, robust and measurable construct. Correlational data indicate a substantial relationship with depressive symptoms (r = −.54) and anxiety symptoms (r = −.39). The difference between these correlations was statistically significant, supporting the particular importance of belongingness cognitions to the etiology of depression. Multiple regression analyses support the hypothesis that workplace belongingness mediates the relationship between affective organisational commitment and depressive symptoms. It is likely that workplaces have the potential to foster environments that are intrinsically less depressogenic by facilitating workplace belongingness. From a clinical perspective, cognitions regarding the workplace psychosocial context appear to be highly salient to individual psychological health, and hence warrant substantial attention.
Abstract:
There have been notable advances in learning to control complex robotic systems using methods such as Locally Weighted Regression (LWR). In this paper we explore some potential limits of LWR for robotic applications, particularly investigating its application to systems with a long horizon of temporal dependence. We define the horizon of temporal dependence as the delay from a control input to a desired change in output. LWR alone cannot be used in a temporally dependent system to find meaningful control values from only the current state variables and output, as the relationship between the input and the current state is under-constrained. By introducing a receding horizon of the future output states of the system, we show that sufficient constraint is applied to learn good solutions through LWR. The new method, Receding Horizon Locally Weighted Regression (RH-LWR), is demonstrated through one-shot learning on a real Series Elastic Actuator controlling a pendulum.
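A generic LWR predictor of the kind this work builds on can be sketched as follows. This is plain locally weighted linear regression, not the RH-LWR method itself, and the quadratic training data are invented:

```python
import numpy as np

def lwr_predict(X, y, x_query, bandwidth=0.3):
    """Locally Weighted Regression: fit a linear model around the query
    point, weighting training samples by a Gaussian kernel on distance."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    xq = np.asarray(x_query, dtype=float)
    w = np.exp(-np.sum((X - xq) ** 2, axis=1) / (2.0 * bandwidth ** 2))
    Xb = np.hstack([X, np.ones((len(X), 1))])  # design matrix with bias term
    xqb = np.append(xq, 1.0)
    # Weighted least squares: beta = (Xb' W Xb)^(-1) Xb' W y
    A = Xb.T @ (w[:, None] * Xb)
    b = Xb.T @ (w * y)
    return float(xqb @ np.linalg.solve(A, b))

# Learn y = x^2 from samples; LWR tracks the curve with local linear fits
xs = np.linspace(-2.0, 2.0, 41)
X, y = xs[:, None], xs ** 2
pred = lwr_predict(X, y, np.array([1.0]))
```

In a temporally dependent system the state columns of `X` alone under-constrain this fit, which is the problem the receding horizon of future outputs is introduced to resolve.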
Abstract:
To date, consumer behaviour research is still over-focused on the functional rather than the dysfunctional. Both empirical and anecdotal evidence suggest that service organisations are burdened with the concept of consumer sovereignty, while consumers freely flout the ‘rules’ of social exchange and behave in deviant and dysfunctional ways. Further, the current scope of consumer misbehaviour research suggests that the phenomenon has principally been studied in the context of economically-focused exchange. This limits our current understanding of consumer misbehaviour to service encounters that are more transactional than relational in nature. Consequently, this thesis takes a Social Exchange approach to consumer misbehaviour and reports a three-stage multi-method study that examined the nature and antecedents of consumer misbehaviour in professional services. It addresses the following broad research question: What is the nature of consumer misbehaviour during professional service encounters? Study One initially explored the nature of consumer misbehaviour in professional service encounters using critical incident technique (CIT) within 38 semi-structured in-depth interviews. The study was designed to develop a better understanding of what constitutes consumer misbehaviour from a service provider’s perspective. Once the nature of consumer misbehaviour had been qualified, Study Two focused on developing and refining calibrated items that formed Guttman-like scales for two consumer misbehaviour constructs: one for the most theoretically-central type of consumer misbehaviour identified in Study One (i.e. refusal to participate) and one for the most well-theorised and salient type of consumer misbehaviour (i.e. verbal abuse) identified in Study One to afford a comparison. This study used Rasch modelling to investigate whether it was possible to calibrate the escalating severity of a series of decontextualised behavioural descriptors in a valid and reliable manner. 
Creating scales of calibrated items that capture the variation in severity of different types of consumer misbehaviour identified in Study One allowed for a more valid and reliable investigation of the antecedents of such behaviour. Lastly, Study Three utilised an experimental design to investigate three key antecedents of consumer misbehaviour: (1) the perceived quality of the service encounter [drawn from Fullerton and Punj’s (1993) model of aberrant consumer behaviour], (2) the violation of consumers’ perceptions of justice and equity [drawn from Rousseau’s (1989) Psychological Contract Theory], and (3) consumers’ affective responses to exchange [drawn from Weiss and Cropanzano’s (1996) Affective Events Theory]. Investigating three key antecedents of consumer misbehaviour confirmed the newly-developed understanding of the nature of consumer misbehaviour during professional service encounters. Combined, the results of the three studies suggest that consumer misbehaviour is characteristically different within professional services. The most salient and theoretically-central behaviours can be measured using increasingly severe decontextualised behavioural descriptors. Further, increasingly severe forms of consumer misbehaviour are likely to occur as a response to consumer anger at low levels of interpersonal service quality. These findings have a range of key implications for both marketing theory and practice.
Abstract:
Isoindoline nitroxides are potentially useful probes for viable biological systems, exhibiting low cytotoxicity, moderate rates of biological reduction and favorable Electron Paramagnetic Resonance (EPR) characteristics. We have evaluated the anionic (5-carboxy-1,1,3,3-tetramethylisoindolin-2-yloxyl; CTMIO), cationic (5-(N,N,N-trimethylammonio)-1,1,3,3-tetramethylisoindolin-2-yloxyl iodide; QATMIO) and neutral (1,1,3,3-tetramethylisoindolin-2-yloxyl; TMIO) nitroxides and their isotopically labeled analogs (²H₁₂- and/or ²H₁₂-¹⁵N-labeled) as potential EPR oximetry probes. An active ester analogue of CTMIO, designed to localize intracellularly, and the azaphenalene nitroxide 1,1,3,3-tetramethyl-2,3-dihydro-2-azaphenalen-2-yloxyl (TMAO) were also studied. While the EPR spectra of the unlabeled nitroxides exhibit high sensitivity to O₂ concentration, deuteration resulted in a loss of superhyperfine features and a subsequent reduction in O₂ sensitivity. Labeling the nitroxides with ¹⁵N increased the signal intensity and this may be useful in decreasing the detection limits for in vivo measurements. The active ester nitroxide showed approximately 6% intracellular localization and low cytotoxicity. The EPR spectra of the TMAO nitroxide indicated an increased rigidity in the nitroxide ring, due to dibenzo-annulation.