188 results for Feature types


Abstract:

While it is commonly accepted that computability on a Turing machine in polynomial time represents a correct formalization of the notion of a feasibly computable function, there is no similar agreement on how to extend this notion to functionals, that is, on which functionals should be considered feasible. One possible paradigm was introduced by Mehlhorn, who extended Cobham's definition of feasible functions to type 2 functionals. Subsequently, this class of functionals (with inessential changes of definition) was studied by Townsend, who calls this class POLY, and by Kapron and Cook, who call the same class the basic feasible functionals. Kapron and Cook gave an oracle Turing machine model characterisation of this class. In this article, we demonstrate that the class of basic feasible functionals has recursion-theoretic properties which naturally generalise the corresponding properties of the class of feasible functions, thus giving further evidence that the notion of feasibility of functionals mentioned above is correctly chosen. We also improve the Kapron and Cook result on machine representation. Our proofs are based on essential applications of logic. We introduce a weak fragment of second-order arithmetic with second-order variables ranging over functions from N to N which suitably characterises the basic feasible functionals, and show that it is a useful tool for investigating their properties. In particular, we provide an example of how one can extract feasible programs from mathematical proofs that use nonfeasible functions.
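
As a toy illustration only, and not the paper's formalism, the sketch below shows the shape of a type-2 functional: it takes an ordinary function as an oracle argument, and its cost can be metered in oracle queries plus local work, the quantity a second-order polynomial bound would control. The function and the bound are invented for illustration.

```python
# A type-2 functional: its first argument is itself a function (an "oracle").
# Hypothetical example; not taken from the paper.
from typing import Callable

def bounded_max(f: Callable[[int], int], x: int) -> int:
    """F(f, x) = max of f(i) over 0 <= i <= |x|, with |x| the bit length of x.

    Makes |x| + 1 oracle queries and only polynomial local work, so its
    running time is bounded by a (second-order) polynomial in |x| and the
    sizes of the oracle's answers.
    """
    bits = x.bit_length()
    return max(f(i) for i in range(bits + 1))

# Instantiating the oracle with an ordinary feasible function:
print(bounded_max(lambda n: n * n, 1000))  # queries f(0)..f(10); prints 100
```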

Abstract:

Previous research suggests that soil organic C pools may be a feature of semiarid regions that is particularly sensitive to climatic changes. We instituted an 18-mo experiment along an elevation gradient in northern Arizona to evaluate the influence of temperature, moisture, and soil C pool size on soil respiration. Soils, from underneath different tree canopy types and interspaces of three semiarid ecosystems, were moved upslope and/or downslope to modify soil climate. Soils moved downslope experienced increased temperature and decreased precipitation, resulting in decreased soil moisture and soil respiration (by as much as 23 and 20%, respectively). Soils moved upslope to more mesic, cooler sites had greater soil water content and increased rates of soil respiration (by as much as 40%), despite decreased temperature. Soil respiration rates normalized for total C were not significantly different within any of the three incubation sites, indicating that, under identical climatic conditions, soil respiration is directly related to soil C pool size for the incubated soils. Normalized soil respiration rates between sites differed significantly for all soil types and were always greater for soils incubated under more mesic, but cooler, conditions. Total soil C did not change significantly during the experiment, but estimates suggest that significant portions of the rapidly cycling C pool were lost. While long-term decreases in aboveground and belowground detrital inputs may ultimately be greater than decreased soil respiration, the initial response to increased temperature and decreased precipitation in these systems is a decrease in annual soil C efflux.

Abstract:

Many studies focused on the development of crash prediction models have resulted in aggregate models that quantify the safety effects of geometric, traffic, and environmental factors on the expected number of total, fatal, injury, and/or property damage crashes at specific locations. Crash prediction models focused on predicting different crash types, however, have rarely been developed. Crash type models are useful for at least three reasons. The first is motivated by the need to identify sites that are high risk with respect to specific crash types but that may not be revealed through crash totals. Second, countermeasures are likely to affect only a subset of all crashes (usually called target crashes), so examination of crash types will lead to an improved ability to identify effective countermeasures. Finally, there is a priori reason to believe that different crash types (e.g., rear-end, angle) are associated with road geometry, the environment, and traffic variables in different ways, and as a result justify the estimation of individual predictive models. The objectives of this paper are to (1) demonstrate that different crash types are associated with predictor variables in different ways (as theorized) and (2) show that estimation of crash type models may lead to greater insights regarding crash occurrence and countermeasure effectiveness. This paper first describes the estimation results of crash prediction models for angle, head-on, rear-end, sideswipe (same direction and opposite direction), and pedestrian-involved crash types. Serving as a basis for comparison, a crash prediction model is also estimated for total crashes. Based on 837 motor vehicle crashes collected at two-lane rural intersections in the state of Georgia, six prediction models are estimated, resulting in two Poisson (P) models and four negative binomial (NB) models. The analysis reveals that factors such as the annual average daily traffic, the presence of turning lanes, and the number of driveways have a positive association with each type of crash, whereas median widths and the presence of lighting are negatively associated. For the best-fitting models, covariates are related to crash types in different ways, suggesting that crash types are associated with different precrash conditions and that modeling total crash frequency may not be helpful for identifying specific countermeasures.
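
To make the modelling approach concrete, here is a minimal sketch in Python on synthetic data: one Poisson and one negative binomial crash-frequency model regressed on hypothetical intersection covariates. All data, coefficients, and variable names below are invented for illustration, not the paper's Georgia dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 800
log_aadt = np.log(rng.uniform(500, 20000, n))  # annual average daily traffic
driveways = rng.poisson(2, n)                  # number of driveways
median_w = rng.uniform(0, 12, n)               # median width
lighting = rng.integers(0, 2, n)               # lighting present (0/1)

# Signs mirror the associations reported above: positive for traffic and
# driveways, negative for median width and lighting.
mu = np.exp(-5.0 + 0.6 * log_aadt + 0.10 * driveways
            - 0.05 * median_w - 0.30 * lighting)
alpha = 0.5                                    # overdispersion parameter
y = rng.negative_binomial(1 / alpha, 1 / (1 + alpha * mu))

X = sm.add_constant(np.column_stack([log_aadt, driveways, median_w, lighting]))
poisson_fit = sm.Poisson(y, X).fit(disp=False)
nb_fit = sm.NegativeBinomial(y, X).fit(disp=False)
print(nb_fit.summary())  # one such model per crash type follows the same pattern
```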

Abstract:

Type unions, pointer variables and function pointers are a long-standing source of subtle security bugs in C program code. Their use can lead to hard-to-diagnose crashes or exploitable vulnerabilities that allow an attacker to attain privileged access to classified data. This paper describes an automatable framework for detecting such weaknesses in C programs statically, where possible, and for generating assertions that will detect them dynamically, in other cases. Based exclusively on analysis of the source code, it identifies required assertions using a type inference system supported by a custom-made symbol table. In our preliminary findings, our type system was able to infer the correct type of unions in different scopes, without manual code annotations or rewriting. Whenever an evaluation is not possible or is difficult to resolve, appropriate runtime assertions are formed and inserted into the source code. The approach is demonstrated via a prototype C analysis tool.

Abstract:

Background For complementary and alternative medicine (CAM) to feature prominently in health care decision-making there is a need to expand the evidence base and to further incorporate economic evaluation into research priorities. In a world of scarce health care resources, with an emphasis on efficiency and clinical efficacy, CAM, like all other treatments, requires rigorous evaluation if it is to be considered in budget decision-making. Methods Economic evaluation provides the tools to measure the costs and health consequences of CAM interventions and thereby inform decision-making. This article offers CAM researchers an introductory framework for understanding, undertaking and disseminating economic evaluation. The types of economic evaluation available for the study of CAM are discussed, and decision modelling is introduced as a method for economic evaluation with much potential for use in CAM. Two types of decision models are introduced, decision trees and Markov models, along with a worked example of how each method is used to examine costs and health consequences. This is followed by a discussion of how this information is used by decision makers. Conclusions Undoubtedly, economic evaluation methods form an important part of health care decision-making. Without formal training, it can seem a daunting task to consider economic evaluation; however, multidisciplinary teams provide an opportunity for health economists, CAM practitioners and other interested researchers to work together to further develop the economic evaluation of CAM.
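
As a flavour of the worked examples mentioned above, the following is a minimal Markov cohort model sketch comparing a hypothetical CAM-adjunct strategy with usual care. Every number here (transition probabilities, costs, utilities, discount rate) is an invented placeholder, not an estimate for any real intervention.

```python
import numpy as np

def run_model(p_well_to_sick, annual_cost, horizon=20, discount=0.035):
    # Three states: Well, Sick, Dead. Rows are from-states, columns to-states.
    P = np.array([[1 - p_well_to_sick - 0.01, p_well_to_sick, 0.01],
                  [0.10, 0.85, 0.05],
                  [0.00, 0.00, 1.00]])
    costs = np.array([annual_cost, annual_cost + 2000.0, 0.0])  # per cycle
    utils = np.array([0.95, 0.70, 0.0])                         # QALY weights
    state = np.array([1.0, 0.0, 0.0])           # whole cohort starts Well
    total_cost = total_qaly = 0.0
    for year in range(horizon):
        state = state @ P                       # one annual Markov cycle
        df = 1.0 / (1.0 + discount) ** (year + 1)
        total_cost += df * state @ costs
        total_qaly += df * state @ utils
    return total_cost, total_qaly

usual_cost, usual_qaly = run_model(p_well_to_sick=0.20, annual_cost=500.0)
cam_cost, cam_qaly = run_model(p_well_to_sick=0.15, annual_cost=800.0)
icer = (cam_cost - usual_cost) / (cam_qaly - usual_qaly)
print(f"Incremental cost-effectiveness ratio: {icer:,.0f} per QALY gained")
```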

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam, and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
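
The fringe-counting arithmetic behind the temperature measurement reduces to one line: each intensity oscillation corresponds to one wavelength of extra optical path difference between the ordinary and extraordinary rays, so the temperature change per fringe is the wavelength divided by the path length times the thermal derivative of the birefringence. The constants in this sketch are illustrative placeholders, not the thesis values.

```python
wavelength = 633e-9   # probe wavelength in m (e.g. a HeNe line); placeholder
L = 10e-3             # optical path length through the crystal in m; placeholder
dn_dT = 4.0e-5        # |d(birefringence)/dT| in 1/K; placeholder

dT_per_fringe = wavelength / (L * dn_dT)   # temperature change per oscillation
fringes_counted = 12
print(f"Per fringe: {dT_per_fringe:.3f} K")
print(f"Inferred change: {fringes_counted * dT_per_fringe:.2f} K")
```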

Abstract:

A good object representation or object descriptor is one of the key issues in object-based image analysis. To effectively fuse color and texture as a unified descriptor at the object level, this paper presents a novel method for feature fusion. A color histogram and the uniform local binary patterns are extracted from arbitrary-shaped image-objects, and kernel principal component analysis (kernel PCA) is employed to find nonlinear relationships among the extracted color and texture features. The maximum likelihood approach is used to estimate the intrinsic dimensionality, which is then used as a criterion for automatic selection of the optimal feature set from the fused features. The proposed method is evaluated using SVM as the benchmark classifier and is applied to object-based vegetation species classification using high spatial resolution aerial imagery. Experimental results demonstrate that great improvement can be achieved by using the proposed feature fusion method.
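
A minimal sketch of the fusion pipeline, assuming scikit-image and scikit-learn as stand-ins for the paper's implementation: the histogram sizes, the kernel, and the retained component count are placeholders (the paper picks the component count from the maximum-likelihood intrinsic-dimensionality estimate rather than fixing it).

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

def object_descriptor(rgb, gray, mask):
    """Color histogram + uniform LBP histogram over one arbitrary-shaped object."""
    color_hist = [np.histogram(rgb[..., c][mask], bins=16, range=(0, 255),
                               density=True)[0] for c in range(3)]
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")  # values 0..9
    lbp_hist = np.histogram(lbp[mask], bins=10, range=(0, 10), density=True)[0]
    return np.concatenate(color_hist + [lbp_hist])

def fuse_and_classify(descriptors, labels, n_components=10):
    """Kernel PCA fuses the stacked descriptors; SVM is the benchmark classifier."""
    fused = KernelPCA(n_components=n_components, kernel="rbf").fit_transform(descriptors)
    return SVC(kernel="rbf").fit(fused, labels)
```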

Abstract:

Path planning and trajectory design for autonomous underwater vehicles (AUVs) is of great importance to the oceanographic research community because automated data collection is becoming more prevalent. Intelligent planning is required to maneuver a vehicle to high-valued locations to perform data collection. In this paper, we present algorithms that determine paths for AUVs to track evolving features of interest in the ocean by considering the output of predictive ocean models. While traversing the computed path, the vehicle provides near-real-time, in situ measurements back to the model, with the intent to increase the skill of future predictions in the local region. The results presented here extend preliminary developments of the path planning portion of an end-to-end autonomous prediction and tasking system for aquatic, mobile sensor networks. This extension is the incorporation of multiple vehicles to track the centroid and the boundary of the extent of a feature of interest. Similar algorithms to those presented here are under development to consider additional locations for multiple types of features. The primary focus here is on algorithm development utilizing model predictions to assist in solving the motion planning problem of steering an AUV to high-valued locations, with respect to the data desired. We discuss the design technique to generate the paths, present simulation results and provide experimental data from field deployments for tracking dynamic features by use of an AUV in the Southern California coastal ocean.
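
A sketch of how centroid and boundary waypoints might be pulled from a gridded model field, in the spirit of the algorithms described above; the thresholding rule and the grid layout are assumptions for illustration.

```python
import numpy as np

def feature_waypoints(field, lon, lat, threshold):
    """Centroid and boundary cells of the region where field >= threshold.

    field, lon, lat: 2D arrays from the predictive ocean model.
    """
    mask = field >= threshold
    centroid = (lon[mask].mean(), lat[mask].mean())   # one vehicle tracks this
    # Boundary: cells inside the feature with an outside 4-neighbour
    # (wrap-around at the grid edge is ignored in this sketch).
    interior = (np.roll(mask, 1, 0) & np.roll(mask, -1, 0) &
                np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    edge = mask & ~interior
    boundary = np.column_stack([lon[edge], lat[edge]])  # second vehicle's track
    return centroid, boundary
```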

Abstract:

Trajectory design for Autonomous Underwater Vehicles (AUVs) is of great importance to the oceanographic research community. Intelligent planning is required to maneuver a vehicle to high-valued locations for data collection. We consider the use of ocean model predictions to determine the locations to be visited by an AUV, which then provides near-real-time, in situ measurements back to the model to increase the skill of future predictions. The motion planning problem of steering the vehicle between the computed waypoints is not considered here. Our focus is on the algorithm to determine relevant points of interest for a chosen oceanographic feature. This represents a first approach to an end-to-end autonomous prediction and tasking system for aquatic, mobile sensor networks. We design a sampling plan and present experimental results with AUV retasking in the Southern California Bight (SCB) off the coast of Los Angeles.

Abstract:

This paper presents a robust stochastic framework for the incorporation of visual observations into conventional estimation, data fusion, navigation and control algorithms. The representation combines Isomap, a non-linear dimensionality reduction algorithm, with expectation maximization, a statistical learning scheme. The joint probability distribution of this representation is computed offline based on existing training data. The training phase of the algorithm results in a nonlinear and non-Gaussian likelihood model of natural features conditioned on the underlying visual states. This generative model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The instantiated likelihoods are expressed as a Gaussian mixture model and are conveniently integrated within existing non-linear filtering algorithms. Example applications based on real visual data from heterogeneous, unstructured environments demonstrate the versatility of the generative models.
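
A minimal sketch of the offline/online split, using scikit-learn's Isomap and GaussianMixture as stand-ins: a mixture is fitted over the joint (embedding, state) space offline, and online it is evaluated across a grid of candidate states to instantiate a likelihood for a new observation. The dimensions, component counts, and state grid are placeholders.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

# Offline: embed training features and fit a mixture over (embedding, state).
features = np.random.rand(500, 64)     # stand-in visual feature vectors
states = np.random.rand(500, 1)        # stand-in underlying visual states
iso = Isomap(n_components=3).fit(features)
joint = np.hstack([iso.transform(features), states])
gmm = GaussianMixture(n_components=5, covariance_type="full").fit(joint)

# Online: instantiate a likelihood over candidate states for one observation.
def likelihood_over_states(new_feature, state_grid):
    emb = iso.transform(new_feature.reshape(1, -1))
    pts = np.hstack([np.repeat(emb, len(state_grid), axis=0),
                     state_grid.reshape(-1, 1)])
    return np.exp(gmm.score_samples(pts))   # unnormalised, ready for a filter

lik = likelihood_over_states(np.random.rand(64), np.linspace(0.0, 1.0, 50))
```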

Abstract:

This paper presents a robust stochastic model for the incorporation of natural features within data fusion algorithms. The representation combines Isomap, a non-linear manifold learning algorithm, with Expectation Maximization, a statistical learning scheme. The representation is computed offline and results in a non-linear, non-Gaussian likelihood model relating visual observations such as color and texture to the underlying visual states. The likelihood model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The likelihoods are expressed as a Gaussian Mixture Model so as to permit convenient integration within existing nonlinear filtering algorithms. The resulting compactness of the representation is especially suitable for decentralized sensor networks. Real visual data consisting of natural imagery acquired from an Unmanned Aerial Vehicle is used to demonstrate the versatility of the feature representation.

Abstract:

To date, the majority of films that utilise or feature hip hop music and culture have either been in the realm of documentary or in 'show musicals' (where the film musical's device of characters bursting into song is justified by a narrative about the pursuit of a career in the entertainment industry). Thus, most films that feature hip hop expression have in some way been tied to the subject of hip hop. A research interest and enthusiasm developed for utilising hip hop expression in film in a new way, one which would extend the narrative possibilities of hip hop film to wider topics and themes. The creation of the thesis film Out of My Cloud, and the writing of this accompanying exegesis, investigate the research concern of the potential for the use of hip hop expression in an 'integrated musical' film (where characters break into song without conceit or explanation). Context and rationale for Out of My Cloud (an Australian hip hop 'integrated musical' film) are provided in this writing. It is argued that hip hop is particularly suitable for use in a modern narrative film, and particularly in an 'integrated musical' film, due to its current vibrancy and popularity, the focus of rap (the vocal element of hip hop) on lyrical message and meaning, and rap's use as an everyday, non-performative method of communication. It is also argued that Australian hip hop deserves greater representation in film and literature due to its current popularity and its nature as a unique and distinct form of hip hop. To date, representation of Australian hip hop in film and television has been almost solely restricted to the documentary form. Out of My Cloud borrows from elements of social realist cinema such as: contrasts with mainstream cinema, an exploration/recognition of the relationship between environment and development of character, use of non-actors, location shooting, a political intent of the filmmaker, displaying sympathy for an underclass, representation of underrepresented character types and topics, and a loose narrative structure that does not offer solid resolution. A case is made that it may be appropriate to marry elements of social realist film with hip hop expression due to common characteristics, such as representation of marginalised or underrepresented groups and issues in society, political objectives of the artist/s, and sympathy for an underclass. In developing and producing Out of My Cloud, a specific method of working with, and filming, actor improvisation was developed. This method was informed by the improvisation and associated camera techniques of filmmakers such as Charlie Chaplin, Mike Leigh, Khoa Do, the Dogme 95 filmmakers, and Lars von Trier (post-Dogme 95). A review of the techniques used by these filmmakers is provided in this writing, as well as the impact they have made on my approach. The method utilised in Out of My Cloud was most influenced by Khoa Do's technique of guiding actors to improvise fairly loosely, but with a predetermined endpoint in mind. A variation of this technique was developed for Out of My Cloud, which involved filming with two cameras to allow edits from multiple angles. Specific processes for creating Out of My Cloud are described and explained in this writing. Particular attention is given to the approaches regarding the story elements and the music elements.
Various significant aspects of the process are referred to, including the filming and recording of live musical performances, the recording of 'freestyle' performances (lyrics composed and performed spontaneously) and the creation of a scored musical scene involving a vocal performance without regular timing or rhythm. The documentation of these processes in this writing serves to make the successful elements of the film transferable and replicable for other practitioners in the field, whilst flagging missteps so that fellow practitioners can avoid them in future projects. While Out of My Cloud is not without its shortcomings as a short film work (for example, in the areas of story and camerawork), it provides a significant contribution to the field as a working example of how hip hop may be utilised in an 'integrated musical' film, as well as being a rare example of a narrative film that features Australian hip hop. The film and the accompanying exegesis provide insights that contribute to an understanding of techniques, theories and knowledge in the field of filmmaking practice.

Abstract:

Occlusion is a big challenge for facial expression recognition (FER) in real-world situations. Previous FER efforts to address occlusion suffer from loss of appearance features and are largely limited to a few occlusion types and a single testing strategy. This paper presents a robust approach for FER in occluded images that addresses these issues. A set of Gabor-based templates is extracted from images in the gallery using a Monte Carlo algorithm. These templates are converted into distance features using template matching. The resulting feature vectors are robust to occlusion. Occluded eye and mouth regions, as well as randomly placed occlusion patches, are used for testing. Two testing strategies analyze the effects of these occlusions on the overall recognition performance as well as on each facial expression. Experimental results on the Cohn-Kanade database confirm the high robustness of our approach and provide useful insights about the effects of occlusion on FER. Performance is also compared with previous approaches.
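
A sketch of the template and distance-feature idea, assuming scikit-image as a stand-in implementation: Gabor-filter a gallery face, sample patch templates at random (Monte Carlo) locations, and turn a probe face into distance features by template matching. The patch size, filter parameters, and counts are placeholders.

```python
import numpy as np
from skimage.filters import gabor_kernel
from skimage.feature import match_template
from scipy.ndimage import convolve

def gabor_response(img, frequency=0.25, theta=0.0):
    """Filter a grayscale face image with the real part of a Gabor kernel."""
    return convolve(img, np.real(gabor_kernel(frequency, theta=theta)))

def sample_templates(gallery_img, n_templates=20, size=16, seed=0):
    """Monte Carlo sampling of Gabor-filtered patches from a gallery image."""
    rng = np.random.default_rng(seed)
    g = gabor_response(gallery_img)
    ys = rng.integers(0, g.shape[0] - size, n_templates)
    xs = rng.integers(0, g.shape[1] - size, n_templates)
    return [g[y:y + size, x:x + size] for y, x in zip(ys, xs)]

def distance_features(probe_img, templates):
    """One distance per template: 1 - best normalised cross-correlation."""
    g = gabor_response(probe_img)
    return np.array([1.0 - match_template(g, t).max() for t in templates])

# An occlusion patch only perturbs the templates that overlap it, leaving the
# remaining distances intact, which is the source of robustness here.
```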