Abstract:
In this paper, a generic decoupled image-based control scheme for calibrated cameras obeying the unified projection model is proposed. The proposed decoupled scheme is based on the surface of object projections onto the unit sphere. Such features are invariant to rotational motions, which allows translational motion to be controlled independently of rotational motion. Finally, the proposed scheme is validated in experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robotic platform.
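To illustrate the geometry this scheme relies on, the sketch below lifts a normalized image point onto the unit sphere under the unified projection model. The mirror parameter `xi` and the sample point are assumptions for the example, not values from the paper.

```python
import numpy as np

def lift_to_sphere(x, y, xi):
    """Lift a normalized image point (x, y) onto the unit sphere under the
    unified projection model with mirror parameter xi (xi = 0 reduces to a
    perspective camera, xi = 1 to a parabolic catadioptric one)."""
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])

# Illustrative check: the lifted point lies on the unit sphere.
p = lift_to_sphere(0.3, -0.2, xi=0.8)
print(p, np.linalg.norm(p))  # norm is 1.0 up to rounding
```

Features computed on these sphere points, such as the projected surface used in the paper, are unaffected by pure rotations of the camera, which is what permits the decoupling of translation from rotation.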
Abstract:
Multiresolution techniques are used extensively in the signal processing literature. This paper has two parts. In the first part, we derive a relationship between the general degradation model (Y = BX + W) at coarse and fine resolutions. In the second part, we develop a signal restoration scheme in a multiresolution framework and demonstrate through experiments that knowledge of the relationship between the degradation models at different resolutions helps in obtaining a computationally efficient restoration scheme.
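To make the notation concrete, here is a minimal NumPy sketch of the degradation model Y = BX + W at a fine resolution, together with a coarse counterpart obtained by simple 2:1 averaging. The blur kernel, noise level, and averaging operator are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Fine-resolution degradation Y = BX + W: B is a circulant blur,
# X the true signal, W additive Gaussian noise.
x = rng.standard_normal(n)
kernel = np.array([0.25, 0.5, 0.25])
B = sum(np.roll(np.eye(n), k - 1, axis=1) * kernel[k] for k in range(3))
w = 0.05 * rng.standard_normal(n)
y = B @ x + w

# Coarse resolution via a 2:1 averaging operator D, giving
# DY = (DB)X + DW -- the kind of cross-resolution relation the paper derives.
D = np.zeros((n // 2, n))
for i in range(n // 2):
    D[i, 2 * i] = D[i, 2 * i + 1] = 0.5
y_coarse = D @ y
print(y.shape, y_coarse.shape)  # (64,) (32,)
```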
Abstract:
In this paper we present a model for defining and enforcing a fine-grained information flow policy. We describe how the policy can be enforced on a typical computer and present experiments using the proposed model. A key feature of the model is that it allows the expression of rules which detail precisely which information elements are allowed to mix together. For example, the model allows the expression of a policy which forbids a doctor from mixing the personal medical details of different patients. The enforcement mechanism tracks and records information flows within the system so that dynamic changes to the policy can be made with respect to information elements which may have propagated to different locations in the system.
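A minimal sketch of the flow-tracking idea, assuming a label-set representation: each data item carries the set of information elements that have flowed into it, mixing unions the sets, and the policy forbids certain combinations. The classes and the patient-record rule below are illustrative assumptions, not the paper's mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """A data item tagged with the information elements that flowed into it."""
    labels: frozenset = field(default_factory=frozenset)

def mix(a: Item, b: Item, forbidden: set) -> Item:
    """Combine two items, recording the flow and enforcing the policy."""
    combined = a.labels | b.labels
    for rule in forbidden:
        if rule <= combined:
            raise PermissionError(f"policy forbids mixing {set(rule)}")
    return Item(labels=combined)

# Illustrative policy: records of two different patients must not mix.
policy = {frozenset({"patient:alice", "patient:bob"})}
rec_a = Item(frozenset({"patient:alice"}))
rec_b = Item(frozenset({"patient:bob"}))
mix(rec_a, Item(frozenset({"doctor:notes"})), policy)  # allowed
try:
    mix(rec_a, rec_b, policy)
except PermissionError as e:
    print("blocked:", e)
```

Because every item carries its full flow history, a policy change can be checked retroactively against elements that have already propagated through the system, which is the dynamic-update property the abstract highlights.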
Abstract:
Theory-of-Mind has been defined as the ability to explain and predict human behaviour by imputing mental states, such as attention, intention, desire, emotion, perception and belief, to the self and others (Astington & Barriault, 2001). Theory-of-Mind study began with Piaget and continued through a tradition of meta-cognitive research projects (Flavell, 2004). A study by Baron-Cohen, Leslie and Frith (1985) of Theory-of-Mind abilities in atypically developing children reported major difficulties experienced by children with autism spectrum disorder (ASD) in imputing mental states to others. Since then, a wide range of follow-up research has been conducted to confirm these results. Traditional Theory-of-Mind research on ASD has been based on an either-or assumption: that Theory-of-Mind is something one either possesses or does not. However, this approach fails to take account of how the ASD population themselves experience Theory-of-Mind. This paper suggests an alternative approach, a Theory-of-Mind continuum model, to understand the Theory-of-Mind experience of people with ASD. The Theory-of-Mind continuum model will be developed through a comparison of subjective and objective aspects of mind, and of phenomenal and psychological concepts of mind. This paper will demonstrate the importance of balancing qualitative and quantitative research methods in investigating the minds of people with ASD. It will enrich our theoretical understanding of Theory-of-Mind, as well as carry methodological implications for further studies in Theory-of-Mind.
Abstract:
Broad, early definitions of sustainable development have caused confusion and hesitation among local authorities and planning professionals. This confusion has arisen because loosely defined principles of sustainable development have been employed when setting policies and planning projects, and when gauging the efficiency of these policies in the light of designated sustainability goals. The question of how this theory-rhetoric-practice gap can be filled is the main focus of this chapter. It examines the triple bottom line approach, one of the sustainability accounting approaches widely employed by governmental organisations, and the applicability of this approach to sustainable urban development. The chapter introduces the ‘Integrated Land Use and Transportation Indexing Model’, which incorporates triple bottom line considerations with environmental impact assessment techniques via a geographic information systems-based decision support system. This model helps decision-makers select policy options according to their economic, environmental and social impacts. Its main purpose is to provide valuable knowledge about the spatial dimensions of sustainable development, and to provide fine-detail outputs on the possible impacts of urban development proposals on sustainability levels. In order to embrace sustainable urban development policy considerations, the model is sensitive to the relationship between urban form, travel patterns and socio-economic attributes. Finally, the model is useful in picturing the holistic state of urban settings in terms of their sustainability levels, and in assessing the degree of compatibility of selected scenarios with the desired sustainable urban future.
Abstract:
Since the formal recognition of practice-led research in the 1990s, many higher research degree candidates in art, design and media have submitted creative works along with an accompanying written document or ‘exegesis’ for examination. Various models for the exegesis have been proposed in university guidelines and academic texts during the past decade, and students and supervisors have experimented with its contents and structure. With a substantial number of exegeses submitted and archived, it has now become possible to move beyond proposition to empirical analysis. In this article we present the findings of a content analysis of a large, local sample of submitted exegeses. We identify the emergence of a persistent pattern in the types of content included as well as overall structure. Besides an introduction and conclusion, this pattern includes three main parts, which can be summarized as situating concepts (conceptual definitions and theories); precedents of practice (traditions and exemplars in the field); and researcher’s creative practice (the creative process, the artifacts produced and their value as research). We argue that this model combines earlier approaches to the exegesis, which oscillated between academic objectivity, by providing a contextual framework for the practice, and personal reflexivity, by providing commentary on the creative practice. But this model is more than simply a hybrid: it provides a dual orientation, which allows the researcher to both situate their creative practice within a trajectory of research and do justice to its personally invested poetics. By performing the important function of connecting the practice and creative work to a wider emergent field, the model helps to support claims for a research contribution to the field. We call it a connective model of exegesis.
Abstract:
Introduction: Ovine models are widely used in orthopaedic research. To better understand the impact of orthopaedic procedures, computer simulations are necessary. 3D finite element (FE) models of bones allow implant designs to be investigated mechanically, thereby reducing mechanical testing.
Hypothesis: We present the development and validation of an ovine tibia FE model for use in the analysis of tibia fracture fixation plates.
Material & Methods: Mechanical testing of the tibia consisted of an offset 3-point bend test with three repetitions of loading to 350 N and return to 50 N. Tri-axial stacked strain gauges were applied to the anterior and posterior surfaces of the bone, and two rigid bodies, each consisting of eight infrared active markers, were attached to the ends of the tibia. Positional measurements were taken with a FARO arm 3D digitiser. The FE model was constructed with both geometry and material properties derived from CT images of the bone. The elasticity-density relationship used for material property determination was validated separately using mechanical testing. This model was then transformed to the same coordinate system as the in vitro mechanical test and loads were applied.
Results: Comparison between the mechanical testing and the FE model showed good correlation in surface strains (difference: anterior 2.3%, posterior 3.2%).
Discussion & Conclusion: This method of model creation provides a simple way of generating subject-specific FE models from CT scans. The use of the CT data set for both the geometry and the material properties ensures a more accurate representation of the specific bone. This is reflected in the similarity of the surface strain results.
Abstract:
Minimizing the complexity of group key exchange (GKE) protocols is an important milestone towards their practical deployment. An interesting approach to achieving this goal is to simplify the design of GKE protocols by using generic building blocks. In this paper we investigate the possibility of founding GKE protocols on a primitive called a multi-key encapsulation mechanism (mKEM) and describe the advantages and limitations of this approach. In particular, we show how to design a one-round GKE protocol which satisfies the classical requirement of authenticated key exchange (AKE) security, yet without forward secrecy. As a result, we obtain the first one-round GKE protocol secure in the standard model. We also conduct our analysis using recent formal models that take into account both outsider and insider attacks as well as the notion of key compromise impersonation resilience (KCIR). In contrast to previous models, we show how to model both outsider and insider KCIR within the definition of mutual authentication. Our analysis additionally implies that the insider security compiler by Katz and Shin from ACM CCS 2005 can be used to achieve more than what is shown in the original work, namely both outsider and insider KCIR.
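For intuition about the building block, the sketch below shows a naive multi-recipient KEM over X25519: one ephemeral key and one session key, with a per-recipient masking so a single broadcast lets every group member recover the key. This is purely illustrative and not secure as-is, and it is not the paper's mKEM or GKE construction; it assumes the third-party `cryptography` package.

```python
import secrets
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def mkem_encapsulate(recipient_pubkeys):
    """Toy multi-recipient KEM: one ephemeral key, one session key K,
    and per-recipient maskings of K (intuition only, not secure)."""
    eph = X25519PrivateKey.generate()
    K = secrets.token_bytes(32)
    masked = []
    for pk in recipient_pubkeys:
        shared = eph.exchange(pk)
        pad = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"toy-mkem").derive(shared)
        masked.append(bytes(a ^ b for a, b in zip(K, pad)))
    return eph.public_key(), masked, K

# Three group members; a single broadcast of (ephemeral public key,
# maskings) suffices for everyone to derive K -- hence "one-round".
members = [X25519PrivateKey.generate() for _ in range(3)]
eph_pub, masked, K = mkem_encapsulate([m.public_key() for m in members])
for m, c in zip(members, masked):
    pad = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"toy-mkem").derive(m.exchange(eph_pub))
    assert bytes(a ^ b for a, b in zip(c, pad)) == K
print("all members recovered the session key")
```

Note how nothing here contributes fresh per-session randomness from the recipients, which is consistent with the abstract's observation that a one-round mKEM-based design achieves AKE security but not forward secrecy.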
Abstract:
This paper presents a novel study that aims to contribute to understanding the phenomenon of Enterprise Systems (ES) evaluation in Australasian universities. The proposed study addresses known limitations of arguably the most significant dependent variable in the Information Systems (IS) field: IS Success, or IS-Impact. This study adopts the IS-Impact measurement model, reported by Gable et al. (2008), as the primary commencing theory-base and applies the research extension strategy described by Berthon et al. (2002), extending both the theory and the context. This study employs a longitudinal, multi-method research design with two interrelated phases: exploratory and confirmatory. The exploratory phase aims to investigate the applicability and sufficiency of the IS-Impact dimensions and measures in the new context. The confirmatory phase will gather quantitative data to statistically validate the IS-Impact model as a formative index.
Abstract:
The analysis of investment in the electric power industry has been the subject of intensive research for many years. The efficient generation and distribution of electrical energy is a difficult task involving the operation of a complex network of facilities, often located over very large geographical regions. Electric power utilities have made use of an enormous range of mathematical models. Some models address time spans which last for a fraction of a second, such as those that deal with lightning strikes on transmission lines, while at the other end of the scale there are models which address time horizons of ten or twenty years; these usually involve long-range planning issues.

This thesis addresses the optimal long-term capacity expansion of an interconnected power system. The aim of this study has been to derive a new long-term planning model which recognises the regional differences which exist for energy demand and which are present in the construction and operation of power plant and transmission line equipment. Perhaps the most innovative feature of the new model is the direct inclusion of regional energy demand curves in nonlinear form. This results in a nonlinear capacity expansion model.

After a review of the relevant literature, the thesis first develops a model for the optimal operation of a power grid. This model directly incorporates regional demand curves. The model is a nonlinear programming problem containing both integer and continuous variables. A solution algorithm is developed which is based upon a resource decomposition scheme that separates the integer variables from the continuous ones. The decomposition of the operating problem leads to an iterative scheme which employs a mixed integer programming problem, known as the master, to generate trial operating configurations. The optimum operating conditions of each trial configuration are found using a smooth nonlinear programming model. The dual vector recovered from this model is subsequently used by the master to generate the next trial configuration. The solution algorithm progresses until lower and upper bounds converge. A range of numerical experiments are conducted and these experiments are included in the discussion.

Using the operating model as a basis, a regional capacity expansion model is then developed. It determines the type, location and capacity of additional power plants and transmission lines which are required to meet predicted electricity demands. A generalised resource decomposition scheme, similar to that used to solve the operating problem, is employed. The solution algorithm is used to solve a range of test problems and the results of these numerical experiments are reported. Finally, the expansion problem is applied to the Queensland electricity grid in Australia.
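The master/subproblem interaction described above can be pictured with a short skeleton. This is a hedged sketch of a generic resource-decomposition loop, not the thesis's actual formulation: the candidate configurations, the toy quadratic subproblem, and the dual-based cuts are stand-ins chosen so the example runs end to end.

```python
# Illustrative skeleton of the decomposition: a master problem proposes
# integer operating configurations, a continuous subproblem evaluates each
# one, and dual-based cuts tighten the master's bound until the lower and
# upper bounds converge.

def solve_master(cuts, configurations):
    """Toy master: pick the configuration with the best bound implied by
    the cuts collected so far (a real master would be a MIP)."""
    def bound(c):
        return max((cut(c) for cut in cuts), default=float("-inf"))
    best = min(configurations, key=bound)
    return best, bound(best)

def solve_subproblem(config):
    """Toy continuous subproblem: returns the operating cost and a
    dual-based cut valid for every configuration (stand-in for the NLP)."""
    target = (1, 0, 1)
    cost = sum((u - t) ** 2 for u, t in zip(config, target))
    dual = [2 * (u - t) for u, t in zip(config, target)]
    cut = lambda c: cost + sum(d * (ci - ui)
                               for d, ci, ui in zip(dual, c, config))
    return cost, cut

configurations = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (1, 1, 1)]
cuts, upper, lower = [], float("inf"), float("-inf")
while upper - lower > 1e-6:
    if not cuts:  # bootstrap with an arbitrary trial configuration
        trial = configurations[0]
    else:
        trial, lower = solve_master(cuts, configurations)
    cost, cut = solve_subproblem(trial)   # evaluate trial configuration
    upper = min(upper, cost)              # best operating cost seen so far
    cuts.append(cut)                      # dual information back to master
print("converged: best operating cost", upper)
```

The convexity of the toy subproblem guarantees the linearised cuts underestimate the true cost, which is what lets the lower and upper bounds meet, mirroring the convergence argument sketched in the abstract.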
Abstract:
We give a direct construction of a certificateless key encapsulation mechanism (KEM) in the standard model that is more efficient than the generic constructions previously proposed by Huang and Wong \cite{DBLP:conf/acisp/HuangW07}. We use a direct construction from Kiltz and Galindo's KEM scheme \cite{DBLP:conf/acisp/KiltzG06} to obtain a certificateless KEM in the standard model; our construction is roughly twice as efficient as the generic construction. We also address the security flaw discovered by Selvi et al. \cite{cryptoeprint:2009:462}.
Abstract:
We show how to construct a certificateless key agreement protocol from the certificateless key encapsulation mechanism introduced by \cite{lippold-ICISC_2009} at ICISC 2009, using the \cite{DBLP:conf/acisp/BoydCNP08} protocol from ACISP 2008. We introduce the Canetti-Krawczyk (CK) model for certificateless cryptography, give security notions for Type I and Type II adversaries in the CK model, and highlight the differences from the existing e$^2$CK model discussed by \cite{DBLP:conf/pairing/LippoldBN09}. The resulting CK model is more relaxed, thus giving the adversary more power than the original CK model does.
Abstract:
A mathematical model is developed to simulate the discharge of a LiFePO4 cathode. This model spans three size scales, matching experimental observations reported in the literature on the multi-scale nature of LiFePO4 material. A shrinking core is used on the smallest scale to represent the phase transition of LiFePO4 during discharge. The model is then validated against existing experimental data, and this validated model is then used to investigate parameters that influence active material utilisation. Specifically, the size and composition of agglomerates of LiFePO4 crystals are discussed, and we investigate and quantify the relative effects that the ionic and electronic conductivities within the oxide have on oxide utilisation. We find that agglomerates of crystals can be tolerated under low discharge rates. The role of the electrolyte in limiting (cathodic) discharge is also discussed, and we show that electrolyte transport does limit performance at high discharge rates, confirming the conclusions of recent literature.
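For readers unfamiliar with the shrinking-core picture, a standard sharp-interface mass balance (given here in a generic form, not necessarily the exact equations used in the paper) relates the motion of the phase front to the applied lithium flux:

```latex
% Sharp-interface shrinking-core mass balance for a spherical particle of
% radius R: all lithium flux j entering at the surface is consumed at the
% phase front r_p(t), whose speed is set by the concentration jump
% c_beta - c_alpha between the two phases.
\begin{equation}
  \left(c_{\beta} - c_{\alpha}\right)\frac{\mathrm{d}r_p}{\mathrm{d}t}
    = -\, j \left(\frac{R}{r_p}\right)^{2},
  \qquad r_p(0) = R.
\end{equation}
```

Because of the $(R/r_p)^2$ factor, the front must move faster as it approaches the particle centre, one mechanism by which utilisation degrades at high discharge rates.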
Abstract:
This paper introduces a novel technique to directly optimise the Figure of Merit (FOM) for phonetic spoken term detection. The FOM is a popular measure of STD accuracy, making it an ideal candidate for use as an objective function. A simple linear model is introduced to transform the phone log-posterior probabilities output by a phone classifier into enhanced log-posterior features that are more suitable for the STD task. Direct optimisation of the FOM is then performed by training the parameters of this model using a non-linear gradient descent algorithm. Substantial FOM improvements of 11% relative are achieved on held-out evaluation data, demonstrating the generalisability of the approach.
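A rough sketch of the enhancement step, under stated assumptions: the linear transform below maps frame-level phone log-posteriors to enhanced features, and its weights are nudged by gradient ascent on a sigmoid-smoothed surrogate objective standing in for the (non-differentiable) FOM. The dimensions, synthetic labels, surrogate, and learning rate are all illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_phones, n_frames = 40, 200

# Frame-level phone log-posteriors from an (assumed) phone classifier.
logpost = np.log(rng.dirichlet(np.ones(n_phones), size=n_frames))

# Linear enhancement model applied per frame: enhanced = W @ x + b.
W = np.eye(n_phones) + 0.01 * rng.standard_normal((n_phones, n_phones))
b = np.zeros(n_phones)

# Synthetic hit / false-alarm frames for one hypothetical target phone.
hits, fas = rng.choice(n_frames, 30), rng.choice(n_frames, 100)
phone = 7

def surrogate(W, b):
    """Sigmoid-smoothed stand-in for the FOM: push hit scores above FA scores."""
    enhanced = logpost @ W.T + b
    margin = enhanced[hits, phone].mean() - enhanced[fas, phone].mean()
    return 1.0 / (1.0 + np.exp(-margin))

# One numerical-gradient ascent step on the surrogate (finite differences
# keep the sketch dependency-free; a real implementation would use
# analytic gradients of its objective).
eps, lr = 1e-4, 0.5
base = surrogate(W, b)
grad = np.zeros_like(W)
for i in range(n_phones):
    for j in range(n_phones):
        Wp = W.copy()
        Wp[i, j] += eps
        grad[i, j] = (surrogate(Wp, b) - base) / eps
W += lr * grad
print("surrogate objective after one step:", surrogate(W, b))
```

The key design idea the abstract describes is exactly this separation: keep the phone classifier fixed, and train only a small linear layer on its log-posteriors against a detection-oriented objective rather than a frame-classification one.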