285 results for Free distance
Abstract:
A security system based on the recognition of the iris of human eyes using the wavelet transform is presented. The zero-crossings of the wavelet transform are used to extract the unique features obtained from the grey-level profiles of the iris. The recognition process is performed in two stages. The first stage consists of building a one-dimensional representation of the grey-level profiles of the iris, followed by obtaining the wavelet-transform zero-crossings of the resulting representation. The second stage is the matching procedure for iris recognition. The proposed approach uses only a few selected intermediate resolution levels for matching, thus making it computationally efficient as well as less sensitive to noise and quantisation errors. A normalisation process is implemented to compensate for size variations due to the possible changes in the camera-to-face distance. The technique has been tested on real images in both noise-free and noisy conditions. The technique is being investigated for real-time implementation, as a stand-alone system, for access control to high-security areas.
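The abstract does not include an implementation; a minimal sketch of the core idea, extracting zero-crossings of a dyadic wavelet transform of a one-dimensional grey-level profile at a few intermediate scales, might look as follows. The wavelet is approximated here by a second-derivative-of-Gaussian filter, and the function names and scale choices are illustrative assumptions, not the authors' code.

```python
# A minimal sketch (not the authors' implementation): zero-crossings of a
# dyadic wavelet transform of a 1-D iris grey-level profile at a few
# intermediate scales.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def wavelet_zero_crossings(profile, scales=(4, 8, 16)):
    """Return, per scale, the indices where the wavelet response changes sign.

    The wavelet is approximated by the second derivative of a Gaussian, so
    zero-crossings mark grey-level transitions at that resolution level.
    """
    crossings = {}
    for s in scales:
        response = gaussian_filter1d(profile.astype(float), sigma=s, order=2)
        signs = np.sign(response)
        idx = np.where(np.diff(signs) != 0)[0]   # sign change between i and i+1
        crossings[s] = idx
    return crossings

# Example: a synthetic 1-D profile standing in for a normalised iris ring.
profile = np.cos(np.linspace(0, 20 * np.pi, 1024)) + 0.1 * np.random.randn(1024)
features = wavelet_zero_crossings(profile)
```

Matching could then proceed by comparing the zero-crossing positions of an enrolled profile with those of a candidate profile at the same selected scales.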
Abstract:
Mesoporous titania microspheres composed of nanosheets with exposed active facets were prepared by hydrothermal synthesis in the presence of hexafluorosilicic acid. They exhibited superior catalytic activity in the solvent-free synthesis of azoxybenzene by oxidation of aniline and could be used for 7 cycles with slight loss of activity.
Abstract:
The use of Mahalanobis squared distance–based novelty detection in statistical damage identification has become increasingly popular in recent years. The merit of the Mahalanobis squared distance–based method is that it is simple and requires low computational effort to enable the use of a higher dimensional damage-sensitive feature, which is generally more sensitive to structural changes. Mahalanobis squared distance–based damage identification is also believed to be one of the most suitable methods for modern sensing systems such as wireless sensors. Although possessing such advantages, this method is rather strict in its input requirements, as it assumes the training data to be multivariate normal, which is not always available, particularly at an early monitoring stage. As a consequence, it may result in an ill-conditioned training model with erroneous novelty detection and damage identification outcomes. To date, there appears to be no study on how to systematically cope with such practical issues, especially in the context of a statistical damage identification problem. To address this need, this article proposes a controlled data generation scheme, which is based upon the Monte Carlo simulation methodology with the addition of several controlling and evaluation tools to assess the condition of output data. By evaluating the convergence of the data condition indices, the proposed scheme is able to determine the optimal setups for the data generation process and subsequently avoid unnecessarily excessive data. The efficacy of this scheme is demonstrated via applications to data from a benchmark structure in the field.
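For context, a minimal sketch of the baseline Mahalanobis-squared-distance novelty detector that such schemes build on (assuming multivariate-normal training features and a chi-squared threshold; this is not the article's controlled data generation scheme itself):

```python
# Baseline MSD novelty detection: fit mean/covariance on baseline features,
# score new observations, flag those beyond a chi-squared confidence limit.
import numpy as np
from scipy.stats import chi2

def train_msd_model(X_train):
    """Estimate the mean vector and inverse covariance of baseline features."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False)
    return mu, np.linalg.inv(cov)

def mahalanobis_sq(X, mu, cov_inv):
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)   # quadratic form per row

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))          # baseline damage-sensitive features
X_test = rng.normal(loc=0.5, size=(20, 8))   # possibly damaged condition
mu, cov_inv = train_msd_model(X_train)
msd = mahalanobis_sq(X_test, mu, cov_inv)
threshold = chi2.ppf(0.99, df=X_train.shape[1])   # 99% confidence limit
novel = msd > threshold
```

With few or non-normal training samples the covariance estimate becomes ill-conditioned, which is the practical problem the proposed data generation scheme addresses.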
Abstract:
In October 2012, Simone presented her book Architecture for a Free Subjectivity to the University of Michigan, Taubman College of Architecture and Urban Planning. This book explores the architectural significance of Deleuze’s philosophy of subjectivization, and Guattari’s overlooked dialogue on architecture and subjectivity. In doing so, it proposes that subjectivity is no longer the exclusive provenance of human beings, but extends to the architectural, the cinematic, the erotic, and the political. It defines a new position within the literature on Deleuze and architecture, while highlighting the neglected issue of subjectivity in contemporary discussion.
Abstract:
A fiber Bragg grating (FBG) accelerometer using transverse forces is more sensitive than one using axial forces with the same mass of the inertial object, because a barely stretched FBG fixed at its two ends is much more sensitive to transverse forces than axial ones. The spring-mass theory, with the assumption that the axial force changes little during the vibration, cannot accurately predict the accelerometer’s sensitivity and resonant frequency in the gravitational direction, because the assumption does not hold when the FBG is barely prestretched. The theory was modified but still required experimental verification, owing to limitations in the original experiments, such as (1) friction between the inertial object and the shell; (2) errors involved in estimation from the time-domain records; (3) limited data; and (4) the large interval (∼5 Hz) between the tested frequencies in the frequency-response experiments. The experiments presented here have verified the modified theory by overcoming those limitations. On the frequency responses, it is observed that the optimal condition for simultaneously achieving high sensitivity and resonant frequency is at the infinitesimal prestretch. On the sensitivity at the same frequency, the experimental sensitivities of the FBG accelerometer with a 5.71 gram inertial object at 6 Hz (1.29, 1.19, 0.88, 0.64, and 0.31 nm/g at the 0.03, 0.69, 1.41, 1.93, and 3.16 nm prestretches, respectively) agree with the predicted static sensitivities (1.25, 1.14, 0.83, 0.61, and 0.29 nm/g, correspondingly). On the resonant frequency, (1) the modified theory’s assumption that the resonant frequencies in the forced and free vibrations are similar is experimentally verified; (2) the dependence of the resonant frequency on the distance between the FBG’s fixed ends is examined, showing it to be independent of that distance; (3) the predictions of the spring-mass theory and the modified theory are compared with the experimental results, showing that the modified theory predicts more accurately. The modified theory can be used more confidently in guiding the accelerometer’s design by predicting its static sensitivity and resonant frequency, and may have applications in other fields in scenarios where the spring-mass theory fails.
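For reference, the classical spring–mass relations that the abstract says become inaccurate here are the standard textbook forms (not the modified theory), for an accelerometer with effective stiffness k and inertial mass m:

```latex
% Standard spring--mass relations (not the modified theory):
f_n = \frac{1}{2\pi}\sqrt{\frac{k}{m}},
\qquad
x_{\mathrm{static}} = \frac{m g}{k}.
```

In a barely prestretched, transversely loaded fibre the axial tension grows appreciably with deflection, so the effective stiffness k is not constant; on my reading, this is why the constant-k forms above fail and a modified theory is needed.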
Abstract:
Importance: Approximately one-third of patients with peripheral artery disease experience intermittent claudication, with consequent loss of quality of life. Objective: To determine the efficacy of ramipril for improving walking ability, patient-perceived walking performance, and quality of life in patients with claudication. Design, Setting, and Patients: Randomized, double-blind, placebo-controlled trial conducted among 212 patients with peripheral artery disease (mean age, 65.5 [SD, 6.2] years), initiated in May 2008 and completed in August 2011 and conducted at 3 hospitals in Australia. Intervention: Patients were randomized to receive 10 mg/d of ramipril (n = 106) or matching placebo (n = 106) for 24 weeks. Main Outcome Measures: Maximum and pain-free walking times were recorded during a standard treadmill test. The Walking Impairment Questionnaire (WIQ) and Short-Form 36 Health Survey (SF-36) were used to assess walking ability and quality of life, respectively. Results: At 6 months, relative to placebo, ramipril was associated with a 75-second (95% CI, 60-89 seconds) increase in mean pain-free walking time (P < .001) and a 255-second (95% CI, 215-295 seconds) increase in maximum walking time (P < .001). Relative to placebo, ramipril improved the WIQ median distance score by 13.8 (Hodges-Lehmann 95% CI, 12.2-15.5), speed score by 13.3 (95% CI, 11.9-15.2), and stair climbing score by 25.2 (95% CI, 25.1-29.4) (P < .001 for all). The overall SF-36 median Physical Component Summary score improved by 8.2 (Hodges-Lehmann 95% CI, 3.6-11.4; P = .02) in the ramipril group relative to placebo. Ramipril did not affect the overall SF-36 median Mental Component Summary score. Conclusions and Relevance: Among patients with intermittent claudication, 24-week treatment with ramipril resulted in significant increases in pain-free and maximum treadmill walking times compared with placebo. This was associated with a significant increase in the physical functioning component of the SF-36 score. Trial Registration: clinicaltrials.gov Identifier: NCT00681226
Abstract:
Whole image descriptors have recently been shown to be remarkably robust to perceptual change especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements and the lack of a meaningful interpretation of these arbitrary thresholds limit the general applicability of these systems. In this paper we present a Bayesian model of probability for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for a FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph’s functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local descriptor methods.
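As a rough illustration of what a probabilistic treatment of whole-image descriptor matching can look like (my own sketch under assumed Gaussian distance distributions, not the paper's model), an observed descriptor distance can be converted into a match probability with Bayes' rule, so a downstream system receives a probability rather than a hard-thresholded match:

```python
# Hedged sketch: P(match | descriptor distance) from two assumed Gaussian
# likelihoods ("same place" vs "different place") and a prior.
from scipy.stats import norm

def match_probability(distance, match_mu, match_sigma,
                      nonmatch_mu, nonmatch_sigma, prior_match=0.01):
    """Bayes' rule over the two hypotheses for a single observed distance."""
    l_match = norm.pdf(distance, match_mu, match_sigma)
    l_non = norm.pdf(distance, nonmatch_mu, nonmatch_sigma)
    num = prior_match * l_match
    return num / (num + (1.0 - prior_match) * l_non)

# The non-match statistics could be estimated online from distances to places
# known to be different, which is the spirit of training without a prior phase.
p = match_probability(distance=0.35, match_mu=0.3, match_sigma=0.1,
                      nonmatch_mu=0.8, nonmatch_sigma=0.15)
```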
Abstract:
The producer has for many years been a central agent in recording studio sessions; the validation of this role was, in many ways, related to the producer’s physical presence in the studio, to a greater or lesser extent. However, improvements in the speed of digital networks have allowed studio sessions to be produced long-distance, in real-time, through communication programs such as Skype or REDIS. How does this impact on the role of the producer, a “nexus between the creative inspiration of the artist, the technology of the recording studio, and the commercial aspirations of the record company” (Howlett 2012)? From observations of a studio recording session in Lisbon produced through Skype from New York, this article focuses on the role of the producer in these relatively new recording contexts involving long distance media networks. Methodology involved participant observation carried out in Estúdios Namouche in Lisbon (where the session took place), as part of doctoral research. This ethnographic approach also included a number of semi-directed ethnographic interviews of the different actors in this scenario—musicians, recording engineers, composers and producers. As a theoretical framework, the research of De Zutter and Sawyer on Distributed Creativity is used, as the recording studio sets an example of “a cognitive system where […] tasks are not accomplished by separate individuals, but rather through the interactions of those individuals” (DeZutter 2009:4). Therefore, creativity often emerges as a result of this interaction.
Abstract:
A simple but accurate method for measuring the Earth’s radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of the sidereal day were used to calculate the radius of the Earth. The radius was measured as 6394.3 ± 118 km, which is within 0.4% of the accepted average value of 6371 km and well within the ±118 km experimental error. The experiment is suitable as a high school or university project and should produce a value for Earth’s radius within a few per cent at latitudes towards the equator, where at some times of the year the ecliptic is approximately normal to the horizon.
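The underlying geometry, in my reconstruction and under the assumption the abstract itself notes (the Sun setting roughly vertically, i.e. the ecliptic approximately normal to the horizon), is the following: if the shadow takes a time t to climb a height h after ground-level sunset, the Earth has rotated through an angle ωt with ω = 2π/T_sidereal, and

```latex
\cos(\omega t) = \frac{R}{R + h}
\quad\Longrightarrow\quad
R = \frac{h\,\cos(\omega t)}{1 - \cos(\omega t)} \approx \frac{2h}{(\omega t)^{2}} .
```

The small-angle form shows why short timing errors matter: R scales with the inverse square of the measured time.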
Abstract:
Purpose. To establish a simple and rapid analytical method, based on direct insertion/electron ionization-mass spectrometry (DI/EI-MS), for measuring free cholesterol in tears from humans and rabbits. Methods. A stable-isotope dilution protocol employing DI/EI-MS in selected ion monitoring mode was developed and validated. It was used to quantify the free cholesterol content in human and rabbit tear extracts. Tears were collected from adult humans (n = 15) and rabbits (n = 10) and lipids extracted. Results. Screening, full-scan (m/z 40-600) DI/EI-MS analysis of crude tear extracts showed that the diagnostic ions located in the mass range m/z 350 to 400 were those derived from free cholesterol, with no contribution from cholesterol esters. DI/EI-MS data acquired using selected ion monitoring (SIM) were analyzed for the abundance ratios of diagnostic ions relative to their stable isotope-labeled analogues arising from the D6-cholesterol internal standard. Standard curves of good linearity were produced, with an on-probe limit of detection of 3 ng (at 3:1 signal to noise) and a limit of quantification of 8 ng (at 10:1 signal to noise). The concentration of free cholesterol in human tears was 15 ± 6 μg/g, which was higher than in rabbit tears (10 ± 5 μg/g). Conclusions. A stable-isotope dilution DI/EI-SIM method for free cholesterol quantification without prior chromatographic separation was established. Using this method, humans were shown to have higher free cholesterol levels in their tears than rabbits, in agreement with previous reports. This paper provides a rapid and reliable method to measure free cholesterol in small-volume clinical samples. © 2013 The Association for Research in Vision and Ophthalmology, Inc.
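A hedged sketch of the generic stable-isotope-dilution calculation described here (hypothetical calibration numbers, not the study's data): the analyte/internal-standard ion abundance ratio from SIM is mapped through a linear standard curve built from spiked standards, then normalised by the tear sample mass.

```python
# Isotope-dilution quantification sketch with a linear standard curve.
import numpy as np

# Hypothetical standards: known ng of cholesterol per fixed amount of
# D6-cholesterol internal standard, and the measured SIM abundance ratios.
std_ng = np.array([10, 25, 50, 100, 200], dtype=float)
std_ratio = np.array([0.21, 0.52, 1.05, 2.08, 4.15])
slope, intercept = np.polyfit(std_ratio, std_ng, 1)

def free_cholesterol_ug_per_g(sample_ratio, tear_mass_g):
    """Convert a measured analyte/IS ratio to ug of free cholesterol per gram of tears."""
    ng = slope * sample_ratio + intercept
    return (ng / 1000.0) / tear_mass_g

conc = free_cholesterol_ug_per_g(sample_ratio=1.3, tear_mass_g=0.004)
```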
Jacobian-free Newton-Krylov methods with GPU acceleration for computing nonlinear ship wave patterns
Abstract:
The nonlinear problem of steady free-surface flow past a submerged source is considered as a case study for three-dimensional ship wave problems. Of particular interest is the distinctive wedge-shaped wave pattern that forms on the surface of the fluid. By reformulating the governing equations with a standard boundary-integral method, we derive a system of nonlinear algebraic equations that enforce a singular integro-differential equation at each midpoint on a two-dimensional mesh. Our contribution is to solve the system of equations with a Jacobian-free Newton-Krylov method together with a banded preconditioner that is carefully constructed with entries taken from the Jacobian of the linearised problem. Further, we are able to utilise graphics processing unit acceleration to significantly increase the grid refinement and decrease the run-time of our solutions in comparison to schemes that are presently employed in the literature. Our approach provides opportunities to explore the nonlinear features of three-dimensional ship wave patterns, such as the shape of steep waves close to their limiting configuration, in a manner that has been possible in the two-dimensional analogue for some time.
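A minimal sketch of the numerical approach (Jacobian-free Newton–Krylov with a banded preconditioner built from a linearised operator), demonstrated on a toy one-dimensional nonlinear system rather than the ship-wave integro-differential equations of the paper, and without the GPU acceleration:

```python
# JFNK on a Bratu-type test problem; the tridiagonal preconditioner plays the
# role of the banded preconditioner taken from the linearised problem.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.optimize import newton_krylov

n = 200
h = 1.0 / (n - 1)
lam = 1.0

def F(u):
    """Nonlinear residual: u'' + lam*exp(u) = 0 with Dirichlet boundaries."""
    r = np.empty_like(u)
    r[0], r[-1] = u[0], u[-1]
    r[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2 + lam * np.exp(u[1:-1])
    return r

# Banded (tridiagonal) preconditioner assembled from the linearised operator.
lower = np.full(n - 1, 1.0 / h**2); lower[-1] = 0.0   # last row is a boundary row
upper = np.full(n - 1, 1.0 / h**2); upper[0] = 0.0    # first row is a boundary row
main = np.full(n, -2.0 / h**2); main[0] = main[-1] = 1.0
A = sp.diags([lower, main, upper], [-1, 0, 1], format='csc')
lu = spla.splu(A)
M = spla.LinearOperator((n, n), matvec=lu.solve)

u = newton_krylov(F, np.zeros(n), method='lgmres', inner_M=M, f_tol=1e-8)
```

The matrix-free Newton iteration never forms the full Jacobian; only residual evaluations and the cheap banded solve are required, which is what makes GPU acceleration of the residual attractive for the much larger ship-wave meshes.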
Abstract:
In this paper we introduce and discuss the nature of free-play in the context of three open-ended interactive art installation works. We observe the interaction work of situated free-play of the participants in these environments and, building on precedent work, devise a set of sensitising terms derived both from the literature and from what we observe from participants interacting there. These sensitising terms act as guides and are designed to be used by those who experience, evaluate or report on open-ended interactive art. That is, we propose these terms as a common-ground language to be used by participants communicating while in the art work to describe their experience, by researchers in the various stages of research process (observation, coding activity, analysis, reporting, and publication), and by inter-disciplinary researchers working across the fields of HCI and art. This work builds a foundation for understanding the relationship between free-play, open-ended environments, and interactive installations and contributes sensitising terms useful for the HCI community for discussion and analysis of open-ended interactive art works.
Abstract:
The aim of this research is to report initial experimental results and evaluation of a clinician-driven automated method that can address the issue of misdiagnosis from unstructured radiology reports. Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and vast amounts of manual processing of unstructured information, an accurate point-of-care diagnosis is often difficult. A rule-based method that considers the occurrence of clinician-specified keywords related to radiological findings was developed to identify limb abnormalities, such as fractures. A dataset containing 99 narrative reports of radiological findings was sourced from a tertiary hospital. The rule-based method achieved an F-measure of 0.80 and an accuracy of 0.80. While our method achieves promising performance, a number of avenues for improvement were identified using advanced natural language processing (NLP) techniques.
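A hedged sketch of this kind of clinician-keyword, rule-based classifier (the keyword and negation lists and the toy reports below are illustrative assumptions, not the study's rules or data), together with the F-measure used to evaluate it:

```python
# Rule-based flagging of limb abnormalities from report text, plus F-measure.
import re

FRACTURE_KEYWORDS = ["fracture", "fractured", "dislocation", "avulsion"]   # assumed
NEGATIONS = ["no ", "no evidence of ", "without "]                         # assumed

def flag_abnormal(report: str) -> bool:
    """Flag a report if any keyword occurs and is not negated just before it."""
    text = report.lower()
    for kw in FRACTURE_KEYWORDS:
        for m in re.finditer(kw, text):
            window = text[max(0, m.start() - 30):m.start()]
            if not any(neg in window for neg in NEGATIONS):
                return True
    return False

def f_measure(predictions, labels):
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum((not p) and l for p, l in zip(predictions, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

reports = ["Undisplaced fracture of the distal radius.",
           "No evidence of fracture or dislocation."]
labels = [True, False]
preds = [flag_abnormal(r) for r in reports]
print(f_measure(preds, labels))
```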
Abstract:
Objective: To develop and evaluate machine learning techniques that identify limb fractures and other abnormalities (e.g. dislocations) from radiology reports. Materials and Methods: 99 free-text reports of limb radiology examinations were acquired from an Australian public hospital. Two clinicians were employed to identify fractures and abnormalities from the reports; a third senior clinician resolved disagreements. These assessors found that, of the 99 reports, 48 referred to fractures or abnormalities of limb structures. Automated methods were then used to extract features from these reports that could be useful for their automatic classification. The Naive Bayes classification algorithm and two implementations of the support vector machine algorithm were formally evaluated using cross-fold validation over the 99 reports. Results: Results show that the Naive Bayes classifier accurately identifies fractures and other abnormalities from the radiology reports. These results were achieved when extracting stemmed token bigram and negation features, as well as using these features in combination with SNOMED CT concepts related to abnormalities and disorders. The latter feature has not been used in previous works that attempted to classify free-text radiology reports. Discussion: Automated classification methods have proven effective at identifying fractures and other abnormalities from radiology reports (F-measure up to 92.31%). Key to the success of these techniques are features such as stemmed token bigrams, negations, and SNOMED CT concepts associated with morphologic abnormalities and disorders. Conclusion: This investigation shows early promising results and future work will further validate and strengthen the proposed approaches.
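A hedged sketch of the kind of pipeline this abstract describes (stemmed token unigrams/bigrams fed to a Naive Bayes classifier), using toy reports in place of the 99 hospital reports and omitting the SNOMED CT concept features, which require a terminology service:

```python
# Stemmed n-gram features + Multinomial Naive Bayes with cross-validation.
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

stemmer = PorterStemmer()

def stem_tokens(text):
    """Preprocessor: lowercase and stem every whitespace-delimited token."""
    return " ".join(stemmer.stem(tok) for tok in text.lower().split())

reports = [  # hypothetical examples, not the study data
    "Comminuted fracture of the distal radius with dorsal angulation.",
    "No fracture or dislocation identified. Soft tissues unremarkable.",
    "Transverse fracture through the midshaft of the tibia.",
    "Normal alignment. No acute bony abnormality.",
    "Anterior dislocation of the glenohumeral joint.",
    "No radiographic abnormality detected.",
]
labels = [1, 0, 1, 0, 1, 0]   # 1 = fracture/abnormality present

model = make_pipeline(
    CountVectorizer(preprocessor=stem_tokens, ngram_range=(1, 2)),  # stemmed unigrams + bigrams
    MultinomialNB(),
)
scores = cross_val_score(model, reports, labels, cv=3, scoring="f1")
print(scores.mean())
```

Negation features could be approximated by appending a marker (e.g. a "neg_" prefix) to tokens that follow a negation cue, before vectorisation.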