Abstract:
Trajectory design for Autonomous Underwater Vehicles (AUVs) is of great importance to the oceanographic research community. Intelligent planning is required to maneuver a vehicle to high-valued locations for data collection. We consider the use of ocean model predictions to determine the locations to be visited by an AUV, which then provides near-real-time, in situ measurements back to the model to increase the skill of future predictions. The motion planning problem of steering the vehicle between the computed waypoints is not considered here. Our focus is on the algorithm to determine relevant points of interest for a chosen oceanographic feature. This represents a first approach to an end-to-end autonomous prediction and tasking system for aquatic, mobile sensor networks. We design a sampling plan and present experimental results with AUV retasking in the Southern California Bight (SCB) off the coast of Los Angeles.
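The tasking step described above, choosing high-valued locations from a model prediction, can be caricatured as a top-k selection over a scored grid. This is a minimal sketch for illustration only; the grid representation, the scoring, and the function name are assumptions, not the paper's algorithm.

```python
def select_waypoints(prediction, k=5):
    """Pick the k grid cells with the highest model-predicted score
    for the feature of interest as AUV waypoints (row, col)."""
    rows, cols = len(prediction), len(prediction[0])
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    # Sort cells by predicted value, best first, and keep the top k.
    cells.sort(key=lambda rc: prediction[rc[0]][rc[1]], reverse=True)
    return cells[:k]

# Hypothetical 3x3 field of ocean-model scores.
field = [[0.1, 0.9, 0.2],
         [0.4, 0.3, 0.8],
         [0.7, 0.5, 0.6]]
print(select_waypoints(field, k=2))  # → [(0, 1), (1, 2)]
```

A real system would also fold in travel cost and vehicle constraints before committing to a route; this sketch only captures the value-ranking idea.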
Abstract:
Objective: Because studies of crowding in long-term care settings are lacking, the authors sought to: (1) generate initial estimates of crowding in nursing homes and assisted living facilities; and (2) evaluate two operational approaches to its measurement.
Background: Reactions to density and proximity are complex. Greater density intensifies people's reaction to a situation in the direction (positive or negative) in which they would react if the situation occurred under less dense conditions. People with dementia are especially reactive to the environment.
Methods: Using a cross-sectional correlational design in nursing homes and assisted living facilities involving 185 participants, multiple observations (N = 6,455) of crowding and other environmental variables were made. Crowding, location, and sound were measured three times per observation; ambiance was measured once. Data analyses consisted of descriptive statistics, t-tests, and one-way analysis of variance.
Results: Crowding estimates were higher for nursing homes and in dining and activity rooms. Crowding also varied across settings and locations by time of day. Overall, the interaction of location and time affected crowding significantly (N = 5,559, df [47, 511], F = 105.69, p < .0001); effects were greater within location-by-hour than between location-by-hour, but the effect explained slightly less variance in Long-Term Care Crowding Index (LTC-CI) estimates (47.41%) than location alone. Crowding had small, direct, and highly significant correlations with sound and with the engaging subscale for ambiance; a similar, though inverse, correlation was seen with the soothing subscale for ambiance.
Conclusions: Crowding fluctuates in step with routine activities, such as meals, in long-term care settings. Furthermore, a relationship between crowding and other physical characteristics of the environment was found. The LTC-CI is likely to be more sensitive than simple people counts when seeking to evaluate the effects of crowding on the behavior of elders, particularly those with dementia, in long-term care settings.
Abstract:
The paper provides an assessment of the performance of commercial Real Time Kinematic (RTK) systems over longer than recommended inter-station distances. The experiments were set up to test and analyse solutions from the i-MAX, MAX and VRS systems operated with three triangle-shaped network cells, with average inter-station distances of 69 km, 118 km and 166 km. The performance characteristics appraised included initialisation success rate, initialisation time, RTK position accuracy and availability, ambiguity resolution risk and RTK integrity risk, in order to provide a wider perspective on the performance of the tested systems.
The results showed that the performance of all network RTK solutions assessed was affected to a similar degree by the increase in inter-station distance. The MAX solution achieved the highest initialisation success rate, 96.6% on average, albeit with a longer initialisation time. The two VRS approaches achieved a lower initialisation success rate of 80% over the large triangle. In terms of RTK positioning accuracy after successful initialisation, the results indicated good agreement between the actual error growth in both horizontal and vertical components and the accuracy specified by the manufacturers in RMS and parts-per-million (ppm) values.
Additionally, the VRS approaches performed better than MAX and i-MAX when tested on the standard triangle network with a mean inter-station distance of 69 km. However, as the inter-station distance increases, the network RTK software may fail to generate VRS corrections and may then revert to operating in the nearest single-base RTK (or RAW) mode. The position uncertainty occasionally exceeded 2 metres, showing that the RTK rover software was using an incorrect ambiguity-fixed solution to estimate the rover position rather than automatically dropping back to an ambiguity-float solution.
The results identified that the risk of incorrectly resolving ambiguities reached 18%, 20%, 13% and 25% for i-MAX, MAX, Leica VRS and Trimble VRS respectively when operating over the large triangle network. Additionally, the Coordinate Quality indicator values given by the Leica GX1230 GG rover receiver tended to be over-optimistic and did not function well in identifying incorrectly fixed integer ambiguity solutions. In summary, this independent assessment has identified problems and failures that can occur in all of the systems tested, especially when pushed beyond the recommended limits. While such failures are expected, they offer useful insights into where users should be wary and how manufacturers might improve their products. The results also demonstrate that integrity monitoring of RTK solutions is necessary for precision applications and thus deserves serious attention from researchers and system providers.
Abstract:
Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. Limitations of the existing RTK techniques that hinder their application for geodynamics purposes are twofold: (1) the achievable RTK accuracy is at the level of a few centimeters, and the uncertainty of the vertical component is 1.5-2 times worse than that of the horizontal components; and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind these problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects and to solve the established model as an ill-posed problem using the regularization method. In order to compute a reasonable regularization parameter and obtain an optimal regularized solution, the covariance matrix of the positional parameters estimated without the RZTD parameter, which is characterized by the observation geometry, is used to replace the quadratic matrix of their "true" values. As a result, the regularization parameter is computed adaptively as the observation geometry varies. The experimental results show that the new method can efficiently alleviate the model's ill-conditioning and stabilize the solution from a single data epoch. Compared to the results from the conventional least squares method, the new method improves the long-range RTK solution precision from several centimeters to the subcentimeter level in all components. More significantly, the precision of the height component is even higher.
Several geoscience applications that require subcentimeter real-time solutions can benefit greatly from the proposed approach, such as real-time monitoring of earthquakes and large dams, high-precision GPS leveling, and refinement of the vertical datum. In addition, the high-resolution RZTD solutions can contribute to effective recovery of tropospheric slant path delays in order to establish 4-D troposphere tomography.
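The regularization idea at the core of this abstract can be illustrated with a plain Tikhonov-regularized least-squares solve. This is a generic sketch, not the paper's adaptive, geometry-driven scheme for choosing the regularization parameter; the matrices and parameter value below are assumptions for illustration.

```python
import numpy as np

def regularized_lsq(A, b, alpha):
    """Tikhonov-regularized least squares: minimize
    ||A x - b||^2 + alpha * ||x||^2.  Adding alpha * I to the normal
    matrix damps the ill-conditioning that an extra troposphere
    parameter introduces into the positioning model."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Toy design matrix and observations (stand-ins, not GPS data).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
print(regularized_lsq(A, b, alpha=0.0))   # alpha = 0 recovers ordinary least squares
print(regularized_lsq(A, b, alpha=10.0))  # larger alpha shrinks the solution norm
```

The paper's contribution is precisely how alpha is chosen: adaptively, from the covariance of the solution without the RZTD parameter, rather than by a fixed value as here.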
Abstract:
This paper presents a robust stochastic framework for the incorporation of visual observations into conventional estimation, data fusion, navigation and control algorithms. The representation combines Isomap, a non-linear dimensionality reduction algorithm, with expectation maximization, a statistical learning scheme. The joint probability distribution of this representation is computed offline based on existing training data. The training phase of the algorithm results in a non-linear and non-Gaussian likelihood model of natural features conditioned on the underlying visual states. This generative model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The instantiated likelihoods are expressed as a Gaussian mixture model and are conveniently integrated within existing non-linear filtering algorithms. Example applications based on real visual data from heterogeneous, unstructured environments demonstrate the versatility of the generative models.
Abstract:
This paper presents a robust stochastic model for the incorporation of natural features within data fusion algorithms. The representation combines Isomap, a non-linear manifold learning algorithm, with Expectation Maximization, a statistical learning scheme. The representation is computed offline and results in a non-linear, non-Gaussian likelihood model relating visual observations such as color and texture to the underlying visual states. The likelihood model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The likelihoods are expressed as a Gaussian Mixture Model so as to permit convenient integration within existing non-linear filtering algorithms. The resulting compactness of the representation is especially suitable to decentralized sensor networks. Real visual data consisting of natural imagery acquired from an Unmanned Aerial Vehicle is used to demonstrate the versatility of the feature representation.
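The Isomap-plus-EM pipeline described in the two abstracts above can be sketched with off-the-shelf scikit-learn components. This is an assumed stand-in for the authors' implementation: the data, dimensions, and function names are illustrative only.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

def train_feature_model(features, n_dims=2, n_components=3, seed=0):
    """Offline phase: embed high-dimensional visual features (e.g.
    color/texture vectors) with Isomap, then fit a Gaussian mixture
    by EM in the low-dimensional embedded space."""
    iso = Isomap(n_components=n_dims)
    embedded = iso.fit_transform(features)
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(embedded)
    return iso, gmm

def observation_likelihood(iso, gmm, new_features):
    """Online phase: map newly observed features into the learned
    space and return their log-likelihoods under the mixture,
    ready for use inside a non-linear filter."""
    return gmm.score_samples(iso.transform(new_features))

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))  # stand-in for training feature vectors
iso, gmm = train_feature_model(X)
print(observation_likelihood(iso, gmm, X[:5]).shape)  # one log-likelihood per observation
```

The compactness noted in the abstract comes from shipping only the mixture parameters (means, covariances, weights) between nodes rather than raw imagery.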
Abstract:
In a randomized, double-blind study, 202 healthy adults were randomized to receive a live, attenuated Japanese encephalitis chimeric virus vaccine (JE-CV) and placebo 28 days apart in a cross-over design. A subgroup of 98 volunteers received a JE-CV booster at month 6. Safety, immunogenicity, and persistence of antibodies to month 60 were evaluated. There were no unexpected adverse events (AEs), and the incidence of AEs was similar between JE-CV and placebo. There were three serious adverse events (SAEs) and no deaths. A moderately severe case of acute viral illness commencing 39 days after placebo administration was the only SAE considered possibly related to immunization. Of the vaccine recipients, 99% achieved a seroprotective antibody titer ≥ 10 to JE-CV 28 days following the single dose of JE-CV, and 97% were seroprotected at month 6. Kaplan-Meier analysis showed that after a single dose of JE-CV, 87% of the participants who were seroprotected at month 6 were still protected at month 60. This rate was 96% among those who received a booster immunization at month 6. On day 28 after immunization, 95% of subjects developed a neutralizing titer ≥ 10 against at least three of the four strains in a panel of wild-type Japanese encephalitis virus (JEV) strains. At month 60, that proportion was 65% for participants who received a single dose of JE-CV and 75% for the booster group. These results suggest that JE-CV is safe and well tolerated and that a single dose provides long-lasting immunity to wild-type strains.
Abstract:
This full-day workshop invites participants to consider the nexus where the interests of game design, the expectations of play, and HCI meet: the game interface. Game interfaces seem different from the interfaces of other software, and a number of observations have been made about them. Shneiderman famously noticed that while most software designers are intent on following the tenets of the "invisible computer" and making access easy for the user, game interfaces are made for players: they embed challenge. Schell discusses a "strange" relationship between the player and the game enabled by the interface, and user interface designers frequently opine that much can be learned from the design of game interfaces. So where does the game interface actually sit? Even more interesting is the question of whether the history of this relationship and the subsequent expectations are now limiting the potential of game design as an expressive form. Recent innovations in I/O design such as Nintendo's Wii, Sony's Move and Microsoft's Kinect seem to usher in an age of physical player-enabled interaction, experience and embodied, engaged design. This workshop intends to cast light on this often mentioned and sporadically examined area and to establish a platform for new and innovative design in the field.
Abstract:
In the past two decades there has been increasing interest in branding tourism destinations in an effort to meaningfully differentiate them from a myriad of competing places that offer similar attractions and facilities. The academic literature relating to destination branding commenced only as recently as 1998, and there remains a dearth of empirical data testing the effectiveness of brand campaigns, particularly in terms of enhancing destination loyalty. This paper reports the results of an investigation into destination brand loyalty for Australia as a long-haul destination in a South American market. In spite of the high level of academic interest in the measurement of destination perceptions since the 1970s, few previous studies have examined perceptions held by South American consumers. Drawing on a model of consumer-based brand equity (CBBE), antecedents of destination brand loyalty were tested with data from a large Chilean sample of travelers, comprising a mix of previous visitors and non-visitors to Australia. Findings suggest that destination brand awareness, brand image, and brand value are positively related to brand loyalty for a long-haul destination. However, destination brand quality was not significantly related. The results also indicate that Australia is a more compelling destination brand for previous visitors than for non-visitors.
Abstract:
Hydrogels provide a 3-dimensional network for embedded cells and offer promise for cartilage tissue engineering applications. Nature-derived hydrogels, including alginate, have been shown to enhance the chondrocyte phenotype but are variable and not entirely controllable. Synthetic hydrogels, including polyethylene glycol (PEG)-based matrices, have the advantage of repeatability and modularity; mechanical stiffness, cell adhesion, and degradability can be altered independently. In this study, we compared the long-term in vitro effects of different hydrogels (alginate and Factor XIIIa-cross-linked MMP-sensitive PEG at two stiffness levels) on the behavior of expanded human chondrocytes and the development of construct properties. Monolayer-expanded human chondrocytes remained viable throughout culture, but morphology varied greatly in different hydrogels. Chondrocytes were characteristically round in alginate but mostly spread in PEG gels at both concentrations. Chondrogenic gene (COL2A1, aggrecan) expression increased in all hydrogels, but alginate constructs had much higher expression levels of these genes (up to 90-fold for COL2A1), as well as proteoglycan 4, a functional marker of the superficial zone. Also, chondrocytes expressed COL1A1 and COL10A1, indicative of de-differentiation and hypertrophy. After 12 weeks, constructs with lower polymer content were stiffer than similar constructs with higher polymer content, with the highest compressive modulus measured in 2.5% PEG gels. Different materials and polymer concentrations have markedly different potency to affect chondrocyte behavior. While synthetic hydrogels offer many advantages over natural materials such as alginate, they must be further optimized to elicit desired chondrocyte responses for use as cartilage models and for development of functional tissue-engineered articular cartilage.
Abstract:
Uncooperative iris identification systems at a distance suffer from poor resolution of the captured iris images, which significantly degrades iris recognition performance. Super-resolution techniques have been employed to enhance the resolution of iris images and improve recognition performance. However, all existing super-resolution approaches proposed for the iris biometric super-resolve pixel intensity values. This paper considers transferring super-resolution of iris images from the intensity domain to the feature domain. By directly super-resolving only the features essential for recognition, and by incorporating domain-specific information from iris models, improved recognition performance compared to pixel-domain super-resolution can be achieved. This is the first paper to investigate the possibility of feature-domain super-resolution for iris recognition, and experiments confirm the validity of the proposed approach.
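The shift from pixel-domain to feature-domain super-resolution can be caricatured as fusing per-frame feature vectors instead of per-frame pixels. The weighted-average fusion below is an assumption for illustration only, not the paper's method; real systems would weight frames by quality measures such as focus or segmentation confidence.

```python
def fuse_features(frame_features, weights=None):
    """Combine feature vectors extracted from several low-resolution
    iris frames into one higher-quality feature vector by weighted
    averaging, operating entirely in the feature domain."""
    n = len(frame_features)
    if weights is None:
        weights = [1.0] * n  # equal weight per frame by default
    total = sum(weights)
    dim = len(frame_features[0])
    return [sum(w * f[i] for w, f in zip(weights, frame_features)) / total
            for i in range(dim)]

# Two hypothetical 4-D feature vectors from consecutive frames.
print(fuse_features([[1.0, 2.0, 3.0, 4.0],
                     [3.0, 4.0, 5.0, 6.0]]))  # → [2.0, 3.0, 4.0, 5.0]
```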
Abstract:
It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences, because of the large number of terms, patterns, and noise. Most existing popular text mining and classification methods have adopted term-based approaches; however, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. The innovative technique presented in this paper makes a breakthrough on this difficulty. The technique discovers both positive and negative patterns in text documents as higher-level features, and uses them to accurately weight low-level features (terms) based on their specificity and their distributions in the higher-level features. Substantial experiments using this technique on Reuters Corpus Volume 1 and TREC topics show that the proposed approach significantly outperforms both state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machines, and pattern-based methods, on precision, recall and F-measure.
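The idea of weighting low-level terms by their role in higher-level patterns can be sketched in a toy form. Here frequent term pairs stand in for discovered patterns; the pattern discovery, support threshold, and weighting rule are all simplifying assumptions, not the paper's algorithm.

```python
from collections import Counter
from itertools import combinations

def term_weights(documents, min_support=2):
    """Toy pattern-based weighting: treat frequent term pairs as
    higher-level patterns, then weight each term by the total
    support of the patterns it participates in."""
    patterns = Counter()
    for doc in documents:
        for pair in combinations(sorted(set(doc)), 2):
            patterns[pair] += 1
    weights = Counter()
    for (a, b), support in patterns.items():
        if support >= min_support:  # keep only frequent patterns
            weights[a] += support
            weights[b] += support
    return dict(weights)

docs = [["gps", "rtk", "survey"], ["gps", "rtk"], ["gps", "survey"]]
print(term_weights(docs))  # → {'gps': 4, 'rtk': 2, 'survey': 2}
```

A term that appears in many strong patterns ("gps" above) gets a higher weight than one that appears alone, which is the intuition behind using patterns to re-weight terms.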
Abstract:
The journalism revolution is upon us. In a world where we are constantly being told that everyone can be a publisher, and where challenges are emerging from bloggers, Twitterers and podcasters, journalism educators are inevitably reassessing what skills we now need to teach to keep our graduates ahead of the game. QUT this year tackled that question head-on as a curriculum review and program restructure resulted in a greater emphasis on online journalism. The author spent a week in the online newsrooms of two of the major players, ABC online news and thecouriermail.com, to watch, listen and interview some of the key players. This, in addition to interviews with industry leaders from Fairfax and news.com, led to the conclusion that while there are some new skills involved in new media, much of what the industry is demanding is in fact good old-fashioned journalism. Themes of good spelling, grammar, accuracy and writing skills, and a nose for news, recurred when industry players were asked what they would like to see in new graduates. While speed was cited as one of the big attributes needed in online journalism, the conclusion of many of the players was that the skills of a good down-table sub or a journalist working for a wire service were not unlike those most used in online newsrooms.
Abstract:
Spectrum sensing optimisation techniques maximise the efficiency of spectrum sensing while satisfying a number of constraints. Many optimisation models consider the possibility of the primary user changing activity state during the secondary user's transmission period. However, most ignore the possibility of activity change during the sensing period. The observed primary user signal during sensing can exhibit a duty cycle which has been shown to severely degrade detection performance. This paper shows that (a) the probability of state change during sensing cannot be neglected and (b) the true detection performance obtained when incorporating the duty cycle of the primary user signal can deviate significantly from the results expected with the assumption of no such duty cycle.
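Point (a) above can be made concrete under the common assumption of exponentially distributed (memoryless) ON/OFF holding times for the primary user; that modeling assumption and the numbers below are illustrative, not taken from the paper.

```python
import math

def p_state_change(sensing_time, mean_on, mean_off, p_on):
    """Probability that the primary user leaves its current activity
    state at least once during the sensing window, assuming
    exponential ON/OFF holding times.  p_on is the stationary
    probability of finding the user in the ON state."""
    p_leave_on = 1.0 - math.exp(-sensing_time / mean_on)
    p_leave_off = 1.0 - math.exp(-sensing_time / mean_off)
    return p_on * p_leave_on + (1.0 - p_on) * p_leave_off

# Even a sensing window one-tenth of the mean holding time gives a
# roughly 10% chance of a state change during sensing.
print(p_state_change(0.010, 0.100, 0.100, 0.5))  # ≈ 0.095
```

This is why the duty cycle of the observed signal within the sensing window cannot be neglected: the window is short, but not short enough for the channel state to be effectively frozen.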
Abstract:
Despite many arguments to the contrary, the three-act story structure, as propounded and refined by Hollywood, continues to dominate the blockbuster and independent film markets. Recent successes in post-modern cinema could indicate new directions and opportunities for low-budget national cinemas.