993 results for Armington Assumption


Relevance: 10.00%

Abstract:

The over-representation of novice drivers in crashes is alarming. Research indicates that one in five drivers crashes within their first year of driving. Driver training is one of the interventions aimed at decreasing the number of crashes that involve young drivers. Currently, there is a need to develop a comprehensive driver evaluation system that benefits from the advances in Driver Assistance Systems. Since driving depends on fuzzy inputs from the driver (i.e. approximate estimation of the distance to other vehicles and of their speed), the evaluation system must be based on criteria and rules that handle the uncertain and fuzzy characteristics of driving. This paper presents a system that evaluates the data stream acquired from multiple in-vehicle sensors (the Driver Vehicle Environment, DVE) using fuzzy rules and classifies driving manoeuvres (i.e. overtake, lane change and turn) as low risk or high risk. The fuzzy rules use parameters such as following distance, frequency of mirror checks, gaze depth and scan area, distance with respect to lanes, and excessive acceleration or braking during the manoeuvre to assess risk. The fuzzy rules to estimate risk were designed after analysing the selected driving manoeuvres performed by driver trainers. This paper focuses mainly on the difference in gaze pattern between experienced and novice drivers during the selected manoeuvres. Using this system, trainers of novice drivers would be able to empirically evaluate and give feedback to novice drivers regarding their driving behaviour.
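To make the rule style concrete, here is a minimal sketch of how such fuzzy risk rules might be encoded. The membership functions, parameter names and thresholds are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    return float(np.clip(min((x - a) / (b - a + 1e-9),
                             (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def manoeuvre_risk(following_dist_m, mirror_checks_hz, accel_ms2):
    # Illustrative fuzzy sets (hypothetical breakpoints):
    short_gap   = tri(following_dist_m, 0.0, 5.0, 15.0)   # "short following distance"
    few_checks  = tri(mirror_checks_hz, 0.0, 0.05, 0.3)   # "infrequent mirror checks"
    harsh_accel = tri(abs(accel_ms2), 2.0, 5.0, 8.0)      # "excessive accel/braking"

    # Fuzzy AND = min, fuzzy OR = max: high risk if the gap is short AND
    # mirror checks are rare, OR if acceleration/braking is excessive.
    high_risk = max(min(short_gap, few_checks), harsh_accel)
    return "high risk" if high_risk > 0.5 else "low risk"

print(manoeuvre_risk(5.0, 0.05, 1.0))    # -> high risk (tailgating, no mirror checks)
print(manoeuvre_risk(30.0, 0.5, 1.0))    # -> low risk
```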

Relevance: 10.00%

Abstract:

Theory-of-Mind has been defined as the ability to explain and predict human behaviour by imputing mental states, such as attention, intention, desire, emotion, perception and belief, to the self and others (Astington & Barriault, 2001). Theory-of-Mind study began with Piaget and continued through a tradition of meta-cognitive research projects (Flavell, 2004). A study by Baron-Cohen, Leslie and Frith (1985) of Theory-of-Mind abilities in atypically developing children reported major difficulties experienced by children with autism spectrum disorder (ASD) in imputing mental states to others. Since then, a wide range of follow-up research has been conducted to confirm these results. Traditional Theory-of-Mind research on ASD has been based on an either-or assumption: that Theory-of-Mind is something one either possesses or does not. However, this approach fails to take account of how the ASD population themselves experience Theory-of-Mind. This paper suggests an alternative approach, the Theory-of-Mind continuum model, for understanding the Theory-of-Mind experience of people with ASD. The Theory-of-Mind continuum model will be developed through a comparison of subjective and objective aspects of mind, and of phenomenal and psychological concepts of mind. This paper will demonstrate the importance of balancing qualitative and quantitative research methods in investigating the minds of people with ASD. It will enrich our theoretical understanding of Theory-of-Mind, as well as carry methodological implications for further studies in Theory-of-Mind.

Relevance: 10.00%

Abstract:

Fishers are faced with multiple risks, including unpredictability of future catch rates, prices and costs. While the latter are largely beyond the control of fisheries managers, effective fisheries management should reduce uncertainty about future catches. Different management instruments are likely to have different impacts on the risk perception of fishers, and this should manifest itself in their implicit discount rate. Assuming licence and quota values represent the net present value of the flow of expected future profits, a proxy for the implicit discount rate of vessels in a fishery can be derived as the ratio of the average level of profits to the average licence/quota value. From this, an indication of risk perception can be derived, assuming higher discount rates reflect higher levels of systematic risk. In this paper, we apply the capital asset pricing model (CAPM) to determine the risk premium implicit in the discount rates for a range of Australian fisheries, and compare this with the set of management instruments in place. We test the assumption that rights-based management instruments lower perceptions of risk in fisheries. We find little evidence to support this assumption, although the analysis was based on only limited data.
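As a rough worked example of the proxy described above (all numbers invented for illustration, not the paper's data):

```python
# Illustrative figures only.
avg_profit = 1.2e6         # average annual fishery profit ($)
avg_licence_value = 8.0e6  # average market value of licences/quota ($)

# If the licence value ~ NPV of a profit perpetuity, then
# profit / value is a proxy for the implicit discount rate.
implicit_r = avg_profit / avg_licence_value            # 0.15

r_f = 0.05   # risk-free rate (assumed)
r_m = 0.11   # expected market return (assumed)

risk_premium = implicit_r - r_f                        # 0.10
implied_beta = risk_premium / (r_m - r_f)              # CAPM: r = r_f + beta*(r_m - r_f)
print(f"implicit discount rate {implicit_r:.1%}, implied beta {implied_beta:.2f}")
```

Under this reading, a higher implied beta for a fishery suggests that its participants perceive greater systematic risk.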

Relevance: 10.00%

Abstract:

Spectrum sensing is considered to be one of the most important tasks in cognitive radio. Many sensing detectors have been proposed in the literature, with the common assumption that the primary user is either fully present or completely absent within the window of observation. In reality, there are scenarios where the primary user signal only occupies a fraction of the observed window. This paper aims to analyse the effect of the primary user duty cycle on spectrum sensing performance through the analysis of a few common detectors. Simulations show that the probability of detection degrades severely with reduced duty cycle regardless of the detection method. Furthermore, we show that reducing the duty cycle degrades performance more than lowering the signal strength does.
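A minimal Monte Carlo sketch of the effect, assuming a simple energy detector and a Gaussian primary signal occupying only the first fraction of the sensing window (parameters are arbitrary choices, not the paper's simulation setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_prob(duty_cycle, snr_db, n=1000, trials=2000, pfa=0.1):
    """Monte Carlo P_d of an energy detector when the primary signal
    occupies only a fraction (duty_cycle) of the sensing window."""
    snr = 10 ** (snr_db / 10)
    # Threshold set empirically from the noise-only (H0) energy distribution.
    h0 = np.sum(rng.normal(0, 1, (trials, n)) ** 2, axis=1)
    thresh = np.quantile(h0, 1 - pfa)
    # H1: Gaussian signal present for the first duty_cycle*n samples only.
    k = int(duty_cycle * n)
    sig = np.zeros((trials, n))
    sig[:, :k] = rng.normal(0, np.sqrt(snr), (trials, k))
    h1 = np.sum((sig + rng.normal(0, 1, (trials, n))) ** 2, axis=1)
    return np.mean(h1 > thresh)

for d in (1.0, 0.5, 0.2):
    print(f"duty cycle {d:.1f}: P_d = {detection_prob(d, snr_db=-10):.2f}")
```

Running this shows P_d falling as the duty cycle shrinks at fixed SNR, the qualitative behaviour the paper analyses.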

Relevance: 10.00%

Abstract:

Authorised users (insiders) are behind the majority of security incidents with high financial impacts. Because authorisation is the process of controlling users' access to resources, improving authorisation techniques may mitigate the insider threat. Current approaches to authorisation suffer from the assumption that users will not (or cannot) depart from the expected behaviour implicit in the authorisation policy. In reality, however, users can and do depart from the canonical behaviour. This paper argues that the conflict of interest between insiders and authorisation mechanisms is analogous to the subset of problems formally studied in the field of game theory. It proposes a game-theoretic authorisation model that ensures users' potential misuse of a resource is explicitly considered when making an authorisation decision. The resulting authorisation model is dynamic in the sense that its access decisions vary according to changes in the explicit factors that influence the cost of misuse for both the authorisation mechanism and the insider.
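The flavour of the idea can be sketched as a simple expected-utility gate; this is an illustrative stand-in, not the paper's actual game-theoretic model:

```python
def authorise(benefit, misuse_cost, p_misuse):
    """Grant access iff the expected value to the mechanism is positive.

    Illustrative expected-utility rule only: the benefit of legitimate
    use is weighed against the expected cost of misuse, where p_misuse
    may be updated as the insider's observed behaviour (and thus the
    state of the game) changes over time.
    """
    expected_value = (1 - p_misuse) * benefit - p_misuse * misuse_cost
    return expected_value > 0

print(authorise(benefit=10.0, misuse_cost=100.0, p_misuse=0.05))  # True
print(authorise(benefit=10.0, misuse_cost=100.0, p_misuse=0.30))  # False
```

The dynamism described in the abstract corresponds to the same request being granted or denied as the estimated misuse parameters shift.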

Relevance: 10.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
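On the modelling side, the shape parameter of a generalized Gaussian can be estimated from sample moments; the sketch below uses the common moment-ratio (Mallat-style) method as a stand-in for the thesis's own least-squares formulation:

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_shape(coeffs):
    """Estimate the generalized Gaussian shape parameter of wavelet
    coefficients by moment matching: (E|X|)^2 / E[X^2] is a monotone
    function of the shape, so its sample value can be inverted."""
    m1 = np.mean(np.abs(coeffs))
    m2 = np.mean(coeffs ** 2)
    ratio = m1 ** 2 / m2
    rho = lambda b: gamma(2 / b) ** 2 / (gamma(1 / b) * gamma(3 / b)) - ratio
    return brentq(rho, 0.1, 5.0)   # root of rho(b) = sample ratio

# Sanity check: Laplacian samples have true shape parameter 1.0.
x = np.random.default_rng(1).laplace(size=100_000)
print(f"estimated shape: {ggd_shape(x):.2f}")   # ~1.0
```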

Relevance: 10.00%

Abstract:

In recent years, multilevel converters have become more popular and attractive than traditional converters in high voltage and high power applications. Multilevel converters are particularly suitable for harmonic reduction in high power applications where semiconductor devices cannot operate at high switching frequencies, or in high voltage applications where multilevel converters reduce the need to connect devices in series to achieve high switch voltage ratings. This thesis investigated two aspects of multilevel converters: structure and control. The first part of the thesis focuses on the inductance between a DC supply and the inverter components, with the aim of minimising loop inductance, which causes overvoltages and stored energy losses during switching. Three-dimensional finite element simulations and experimental tests have been carried out for all sections to verify the theoretical developments. The major contributions of this section of the thesis are as follows. The use of a large-area thin conductor sheet with a rectangular cross section separated by dielectric sheets (a planar busbar), instead of circular cross section wires, contributes to a reduction of the stray inductance. A number of approximate equations exist for calculating the inductance of a rectangular conductor, but they assume that the current density is uniform throughout the conductor. This assumption is not valid for an inverter with a point injection of current. A mathematical analysis of a planar busbar has been performed at low and high frequencies, and the inductance and resistance values between two points of the planar busbar have been determined. A new physical structure for a voltage source inverter with a symmetrical planar busbar structure, called the Reduced Layer Planar Busbar, is proposed in this thesis based on the current point injection theory. This new type of planar busbar minimises the variation in stray inductance for different switching states. The reduced layer planar busbar is a new innovation in planar busbars for high power inverters, with minimum separation between busbars, optimum stray inductance and improved thermal performance. This type of planar busbar is suitable for high power inverters where the voltage source is supported by several capacitors in parallel in order to provide a low-ripple DC voltage during operation. A two-layer planar busbar with different materials has been analysed theoretically in order to determine the resistance of the busbars during switching. Increasing the resistance of the planar busbar improves the damping of the resonance between stray inductance and capacitance, and so affects the behaviour of the current loop during switching. The aim of this section is to increase the resistance of the planar busbar at high frequencies (during switching) without significantly increasing its resistance at low frequency (50 Hz), using the skin effect. This contribution is a novel busbar structure suitable for high power applications where high resistance is required at switching times. In multilevel converters there are different loop inductances between the busbars and power switches associated with different switching states. The aim of this research is to consider all combinations of the switching states for each multilevel converter topology and identify the loop inductance for each switching state. Results show that the physical layout of the busbars is very important for minimisation of the loop inductance at each switching state. Novel symmetrical busbar structures are proposed for multilevel converters with diode-clamp and flying-capacitor topologies which minimise the worst-case stray inductance over the different switching states. Overshoot voltages and thermal problems are considered for each topology to optimise the planar busbar structure.

In the second part of the thesis, closed-loop current control techniques have been investigated for single and three phase multilevel converters. The aims of this section are to investigate and propose suitable current controllers, such as hysteresis and predictive techniques, for multilevel converters with low harmonic distortion and low switching losses. This section of the thesis can be divided into three parts, as follows. An optimum space vector modulation (SVM) technique for a three-phase voltage source inverter based on a minimum-loss strategy is proposed. One of the degrees of freedom in optimising space vector modulation is the selection of the zero vectors in the switching sequence. This new method improves the number of switching transitions per cycle for a given level of distortion, as the zero vector does not alternate between each sector. The harmonic spectrum and weighted total harmonic distortion for these strategies are compared, and results show up to 7% improvement in weighted total harmonic distortion over the previous minimum-loss strategy. The SVM representation of a set of three-phase voltages or currents is also very convenient for current control techniques. A new hysteresis current control technique for a single-phase multilevel converter with flying-capacitor topology is developed. This technique is based on magnitude and time errors to optimise the level change of the converter output voltage. The method also considers how to rebalance the capacitor voltages using voltage vectors in order to minimise switching losses. The logic control requires handling a large number of switches, and a Programmable Logic Device (PLD) is a natural implementation for the state transition description. Simulation and experimental results describe and verify this current control technique for the converter. Finally, a novel predictive current control technique is proposed for a three-phase multilevel converter, which controls the capacitor voltages and load current with minimum current ripple and switching losses. The advantage of this contribution is that the technique can be applied to more voltage levels without significantly changing the control circuit. A three-phase five-level inverter with a pure inductive load has been implemented to track three-phase reference currents using analogue circuits and a programmable logic device.
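As a rough illustration of the magnitude-error idea behind hysteresis control of a multilevel output, the sketch below raises or lowers the converter level when the current error leaves a band; the band, level count and load model are illustrative assumptions, not the thesis design (which also uses time errors and capacitor-voltage balancing):

```python
import numpy as np

def hysteresis_level(i_err, level, n_levels=5, band=0.5):
    """Raise or lower the converter output level when the current error
    (reference minus measured) leaves the hysteresis band. Illustrative
    magnitude-error rule only."""
    if i_err > band and level < n_levels - 1:
        level += 1            # current too low -> step output voltage up
    elif i_err < -band and level > 0:
        level -= 1            # current too high -> step output voltage down
    return level

# Track a 50 Hz, 10 A reference through an ideal inductive load (assumed values).
dt, L_load, v_step = 1e-4, 1e-2, 100.0   # time step [s], inductance [H], volts per level
i_meas, level = 0.0, 2                   # start at the mid (zero-voltage) level
for k in range(2000):
    i_ref = 10 * np.sin(2 * np.pi * 50 * k * dt)
    level = hysteresis_level(i_ref - i_meas, level)
    v_out = (level - 2) * v_step         # 5 levels -> -200 V .. +200 V
    i_meas += v_out / L_load * dt        # di/dt = v/L for the inductive load
print(f"final tracking error: {abs(i_ref - i_meas):.2f} A")
```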

Relevance: 10.00%

Abstract:

Continuum mechanics provides a mathematical framework for modelling the physical stresses experienced by a material. Recent studies show that physical stresses play an important role in a wide variety of biological processes, including dermal wound healing, soft tissue growth and morphogenesis. Thus, continuum mechanics is a useful mathematical tool for modelling a range of biological phenomena. Unfortunately, classical continuum mechanics is of limited use in biomechanical problems. As cells refashion the fibres that make up a soft tissue, they sometimes alter the tissue's fundamental mechanical structure. Advanced mathematical techniques are needed in order to accurately describe this sort of biological 'plasticity'. A number of such techniques have been proposed by previous researchers. However, models that incorporate biological plasticity tend to be very complicated. Furthermore, these models are often difficult to apply and/or interpret, making them of limited practical use. One alternative approach is to ignore biological plasticity and use classical continuum mechanics. For example, most mechanochemical models of dermal wound healing assume that the skin behaves as a linear viscoelastic solid. Our analysis indicates that this assumption leads to physically unrealistic results. In this thesis we present a novel and practical approach to modelling biological plasticity. Our principal aim is to combine the simplicity of classical linear models with the sophistication of plasticity theory. To achieve this, we perform a careful mathematical analysis of the concept of a 'zero stress state'. This leads us to a formal definition of strain that is appropriate for materials that undergo internal remodelling. Next, we consider the evolution of the zero stress state over time. We develop a novel theory of 'morphoelasticity' that can be used to describe how the zero stress state changes in response to growth and remodelling. Importantly, our work yields an intuitive and internally consistent way of modelling anisotropic growth. Furthermore, we are able to use our theory of morphoelasticity to develop evolution equations for elastic strain. We also present some applications of our theory. For example, we show that morphoelasticity can be used to obtain a constitutive law for a Maxwell viscoelastic fluid that is valid at large deformation gradients. Similarly, we analyse a morphoelastic model of the stress-dependent growth of a tumour spheroid. This work leads to the prediction that a tumour spheroid will always be in a state of radial compression and circumferential tension. Finally, we conclude by presenting a novel mechanochemical model of dermal wound healing that takes into account the plasticity of the healing skin.
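A common starting point in the morphoelasticity literature (the thesis develops its own strain-based formulation; the notation below is an assumed, schematic form) is the multiplicative decomposition of the deformation gradient:

```latex
% Deformation gradient split into growth/remodelling (G), which moves
% the zero stress state, and an elastic accommodation (A):
F = A\,G .

% Schematically, an evolution equation for the elastic strain
% \varepsilon then couples the observed rate of deformation to a
% growth/remodelling source term g:
\frac{\mathrm{D}\varepsilon}{\mathrm{D}t}
  = \operatorname{sym}(\nabla \mathbf{v}) - g(\varepsilon, t),
% with stress computed from \varepsilon alone, so that changes in the
% zero stress state (via g) relax stress without any elastic unloading.
```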

Relevance: 10.00%

Abstract:

The task addressed in this thesis is the automatic alignment of an ensemble of misaligned images in an unsupervised manner. This application is especially useful in computer vision applications where annotations of the shape of an object of interest present in a collection of images are required. Performing this task manually is a slow, tedious, expensive and error-prone process which hinders the progress of research laboratories and businesses. Most recently, the unsupervised removal of geometric variation present in a collection of images has been referred to as congealing, based on the seminal work of Learned-Miller [21]. The only assumption made in congealing is that the parametric nature of the misalignment is known a priori (e.g. translation, similarity, affine, etc.) and that the object of interest is guaranteed to be present in each image. The capability to congeal an ensemble of misaligned images stemming from the same object class has numerous applications in object recognition, detection and tracking. This thesis concerns itself with the construction of a congealing algorithm, titled least-squares congealing, which is inspired by the well-known image-to-image alignment algorithm developed by Lucas and Kanade [24]. The algorithm is shown to have superior performance characteristics when compared to previously established methods: canonical congealing by Learned-Miller [21] and stochastic congealing by Zöllei [39].
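For intuition, a translation-only least-squares alignment step in the Lucas-Kanade style can be sketched as follows; true congealing jointly aligns the whole ensemble and handles richer warps, so this is only an illustrative single-pair reduction:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def lk_translation(template, image, iters=50):
    """Estimate the translation aligning `image` to `template` by
    linearised least squares, Lucas-Kanade style. Uses the gradient of
    the unwarped image as the Jacobian, which is adequate near the
    solution but is an illustrative simplification."""
    p = np.zeros(2)                       # (dy, dx)
    gy, gx = np.gradient(image.astype(float))
    J = np.stack([gy.ravel(), gx.ravel()], axis=1)
    for _ in range(iters):
        shifted = nd_shift(image.astype(float), p, order=1)
        err = (template - shifted).ravel()
        dp, *_ = np.linalg.lstsq(J, err, rcond=None)
        p -= dp                           # err ~ J @ (p - t), so step towards t
        if np.linalg.norm(dp) < 1e-4:
            break
    return p
```

Congealing repeats updates of this kind for every image in the ensemble against an evolving ensemble statistic, rather than a single fixed template.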

Relevance: 10.00%

Abstract:

Data breach notification laws require organisations to notify affected persons or regulatory authorities when an unauthorised acquisition of personal data occurs. Most laws provide a safe harbour to this obligation if the acquired data has been encrypted. There are three types of safe harbour: an exemption, a rebuttable presumption, and a factor-based analysis. We demonstrate, using three condition-based scenarios, that the broad formulation of most encryption safe harbours is based on the flawed assumption that encryption is the silver bullet for personal information protection. We then contend that reliance upon an encryption safe harbour should be dependent upon a rigorous and competent risk-based review conducted on a case-by-case basis. Finally, we recommend the use of both an encryption safe harbour and a notification trigger as our preferred choice for a data breach notification regulatory framework.

Relevance: 10.00%

Abstract:

Microorganisms play key roles in biogeochemical cycling by facilitating the release of nutrients from organic compounds. In doing so, microbial communities use different organic substrates that yield different amounts of energy for maintenance and growth of the community. Carbon utilization efficiency (CUE) is a measure of the efficiency with which substrate carbon is metabolized versus mineralized by the microbial biomass. In the face of global change, we wanted to know how temperature affects the efficiency with which the soil microbial community utilizes an added labile substrate, and to determine the effect of labile soil carbon depletion (through increasing duration of incubation) on the community's ability to respond to an added substrate. Cellobiose was added to soil samples as a model compound at several times over the course of a long-term incubation experiment to measure the amount of carbon assimilated or lost as CO2 respiration. Results indicated that in all cases, the time required for the microbial community to take up the added substrate increased as the incubation time prior to substrate addition increased. However, CUE was not affected by incubation time. Increased temperature generally decreased CUE; thus the microbial community was more efficient at 15 °C than at 25 °C. These results indicate that at warmer temperatures microbial communities may release more CO2 per unit of assimilated carbon. Current climate-carbon models use a fixed CUE to predict how much CO2 will be released as soil organic matter is decomposed. Based on our findings, this assumption may be incorrect due to the variation of CUE with changing temperature.
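CUE is conventionally computed as the fraction of consumed substrate carbon retained in biomass rather than respired; a one-line illustration (numbers invented, not the study's measurements):

```python
def carbon_use_efficiency(c_assimilated, c_respired):
    """CUE = substrate C incorporated into biomass over total C used
    (biomass C + CO2-C respired). Standard definition; the inputs
    below are illustrative, not the study's data."""
    return c_assimilated / (c_assimilated + c_respired)

# Warmer soils respire more C per unit assimilated, lowering CUE:
print(carbon_use_efficiency(c_assimilated=40, c_respired=60))  # 0.40, cooler soil (illustrative)
print(carbon_use_efficiency(c_assimilated=30, c_respired=70))  # 0.30, warmer soil (illustrative)
```

A climate-carbon model holding this ratio fixed would underestimate CO2 release at the warmer temperature, which is the assumption the study questions.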

Relevance: 10.00%

Abstract:

Separability is a concept that is very difficult to define, and yet much of our scientific method is implicitly based upon the assumption that systems can sensibly be reduced to a set of interacting components. This paper examines the notion of separability in the creation of bi-ambiguous compounds, using an analysis based upon the CHSH and CH inequalities. It reports results of an experiment showing that violations of the CHSH and CH inequalities can occur in human conceptual combination.
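For reference, the CHSH statistic combines four pairwise correlations and is bounded by 2 in magnitude for any separable (classical) model; a minimal check (the correlation values here are hypothetical):

```python
def chsh_S(E_ab, E_ab2, E_a2b, E_a2b2):
    """CHSH statistic: separable (classical) models satisfy |S| <= 2."""
    return E_ab - E_ab2 + E_a2b + E_a2b2

# Hypothetical correlations between interpretations of the two
# constituents of a compound under two "measurement contexts" each:
S = chsh_S(0.7, -0.6, 0.65, 0.72)
print(f"S = {S:.2f}; CHSH violated: {abs(S) > 2}")  # S = 2.67 -> violated
```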

Relevance: 10.00%

Abstract:

Norman K. Denzin (1989) claims that the central assumption of the biographical method—that a life can be captured and represented in a text—is open to question. This paper explores Denzin’s statement by documenting the role of creative writers in re-presenting oral histories in two case studies from Queensland, Australia. The first, The Queensland Business Leaders Hall of Fame, was a commercial research project commissioned by the State Library of Queensland (SLQ) in 2009, and involved semi-formal qualitative interviews and digital stories. The second is an on-going practice-led PhD project, The Artful Life: Oral History and Fiction, which investigates the fictionalisation of oral histories. Both projects enter into a dialogue around the re-presentation of oral and life histories, with attention given to the critical scholarship and creative practice in the process. Creative writers re-present a life with particular preoccupations and techniques that align more closely with fiction than non-fiction (Hirsch and Dixon 2008). In this context, oral history resources are viewed not so much as repositories of historical facts, but as ambiguous and fluid narrative sources. The comparison of the two case studies also demonstrates that the aims of a particular project dictate the nature of the re-presentation, revealing that writing about another’s life is a complex act of artful ‘shaping’. Alistair Thomson (2007) notes the growing interdisciplinary nature of oral history scholarship since the 1980s; oral histories are used increasingly in art-based contexts to produce diverse cultural artefacts, such as digital stories and works of fiction, which are very different from traditional histories. What are the methodological implications of such projects? This paper will draw on self-reflexive practice to explore this question.

Relevance: 10.00%

Abstract:

Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. This extra-variation – or dispersion – is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption, and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, with exploration of additional dispersion functions, the use of an independent data set, and an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov Chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
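The varying-dispersion idea can be written down compactly as an NB2 likelihood whose dispersion parameter depends on covariates. The sketch below is a maximum-likelihood analogue with synthetic data (the paper itself uses Bayesian MCMC via the Gibbs sampler); because the synthetic counts are Poisson with a well-specified mean, the fitted extra-variation should come out negligible, mirroring the paper's finding:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(params, X, Z, y):
    """NB2 negative log-likelihood with covariate-dependent dispersion:
    alpha_i = exp(z_i' gamma), so Var(y_i) = mu_i + alpha_i * mu_i**2."""
    k = X.shape[1]
    beta, g = params[:k], params[k:]
    mu = np.exp(X @ beta)                          # mean structure
    r = np.exp(np.clip(-(Z @ g), -20.0, 20.0))     # r = 1/alpha per site
    ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
    return -ll.sum()

# Synthetic intersections: intercept plus log major/minor AADT (hypothetical).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(9, 1, 500), rng.normal(6, 1, 500)])
Z = X.copy()
y = rng.poisson(np.exp(X @ np.array([-8.0, 0.8, 0.3])))  # well-specified Poisson mean
fit = minimize(neg_loglik, np.zeros(6), args=(X, Z, y), method="BFGS")
print(fit.x[:3])   # mean coefficients ~ (-8, 0.8, 0.3); fitted dispersion ~ negligible
```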

Relevance: 10.00%

Abstract:

In recent years the development and use of crash prediction models for roadway safety analyses have received substantial attention. These models, also known as safety performance functions (SPFs), relate the expected crash frequency of roadway elements (intersections, road segments, on-ramps) to traffic volumes and other geometric and operational characteristics. A commonly practiced approach for applying intersection SPFs is to assume that crash types occur in fixed proportions (e.g., rear-end crashes make up 20% of crashes, angle crashes 35%, and so forth) and then apply these fixed proportions to crash totals to estimate crash frequencies by type. As demonstrated in this paper, such a practice makes questionable assumptions and results in considerable error in estimating crash proportions. Through the use of rudimentary SPFs based solely on the annual average daily traffic (AADT) of major and minor roads, the homogeneity-in-proportions assumption is shown not to hold across AADT, because crash proportions vary as a function of both major and minor road AADT. For example, with minor road AADT of 400 vehicles per day, the proportion of intersecting-direction crashes decreases from about 50% with 2,000 major road AADT to about 15% with 82,000 AADT. Same-direction crashes increase from about 15% to 55% for the same comparison. The homogeneity-in-proportions assumption should be abandoned, and crash type models should be used to predict crash frequency by crash type. SPFs that use additional geometric variables would only exacerbate the problem quantified here. Comparison of models for different crash types using additional geometric variables remains the subject of future research.
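A toy illustration of why fixed proportions fail: if each crash type follows its own SPF with different AADT exponents, the type shares necessarily shift with volume. All coefficients below are hypothetical, chosen only to mimic the reported trend:

```python
import numpy as np

def spf(aadt_major, aadt_minor, b0, b1, b2):
    """Rudimentary SPF: expected crashes = exp(b0) * AADT_maj**b1 * AADT_min**b2."""
    return np.exp(b0) * aadt_major ** b1 * aadt_minor ** b2

aadt_major = np.array([2_000.0, 20_000.0, 82_000.0])
aadt_minor = 400.0

# Separate hypothetical SPFs per crash type (coefficients invented):
intersecting = spf(aadt_major, aadt_minor, -8.0, 0.45, 0.60)
same_dir     = spf(aadt_major, aadt_minor, -9.7, 0.95, 0.25)

share = intersecting / (intersecting + same_dir)
print(np.round(share, 2))   # ~[0.50, 0.24, 0.13]: intersecting share falls with AADT
```

Because the two exponents on major-road AADT differ, no single set of fixed proportions can be correct across the AADT range, which is the paper's core point.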