919 results for space-based lasers
Abstract:
This paper investigates the use of the dimensionality-reduction techniques weighted linear discriminant analysis (WLDA) and weighted median Fisher discriminant analysis (WMFD) before probabilistic linear discriminant analysis (PLDA) modeling, for the purpose of improving speaker verification performance in the presence of high inter-session variability. Recently it was shown that WLDA techniques can provide an improvement over traditional linear discriminant analysis (LDA) for channel compensation in i-vector based speaker verification systems. We show in this paper that the speaker-discriminative information available in the distances between pairs of speakers clustered in the development i-vector space can also be exploited in heavy-tailed PLDA modeling by applying the weighted discriminant approaches prior to PLDA modeling. Based upon the results presented in this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that WLDA and WMFD projections before PLDA modeling provide an improved approach compared to uncompensated PLDA modeling for i-vector based speaker verification systems.
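To make the idea concrete, the sketch below computes a weighted LDA projection over development i-vectors in which the between-class scatter is weighted by a function of the distance between speaker means, before the projected vectors are passed to PLDA. It is a minimal illustration, not the authors' implementation; the weighting function, regularisation and dimensions are assumptions.

    import numpy as np

    def wlda_projection(ivectors, labels, out_dim, eps=1e-6):
        """Weighted LDA sketch: emphasise closely spaced (confusable) speaker pairs."""
        speakers = np.unique(labels)
        d = ivectors.shape[1]
        means = {s: ivectors[labels == s].mean(axis=0) for s in speakers}

        # Within-class scatter
        Sw = np.zeros((d, d))
        for s in speakers:
            diff = ivectors[labels == s] - means[s]
            Sw += diff.T @ diff

        # Weighted between-class scatter: pairs of speakers that lie close together
        # in the development i-vector space receive larger weights (assumed weighting).
        Sb = np.zeros((d, d))
        for i, si in enumerate(speakers):
            for sj in speakers[i + 1:]:
                delta = (means[si] - means[sj])[:, None]
                w = 1.0 / (eps + np.linalg.norm(delta) ** 2)
                Sb += w * (delta @ delta.T)

        # Leading directions of the generalised eigenproblem Sb v = lambda Sw v
        eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw + eps * np.eye(d), Sb))
        order = np.argsort(-eigvals.real)
        return eigvecs[:, order[:out_dim]].real   # project i-vectors before PLDA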
Abstract:
Recently, some authors have considered a new diffusion model, the space and time fractional Bloch-Torrey equation (ST-FBTE). Magin et al. (2008) derived analytical solutions with fractional-order dynamics in space (i.e., α = 1 and β an arbitrary real number, 1 < β ≤ 2) and in time (i.e., 0 < α < 1 and β = 2), respectively. Yu et al. (2011) derived an analytical solution and an effective implicit numerical method for solving ST-FBTEs, and also discussed the stability and convergence of the implicit numerical method. However, owing to the computational overhead of performing nuclear magnetic resonance (NMR) simulations in three dimensions, they presented a study based on a two-dimensional example to confirm their theoretical analysis. Alternating direction implicit (ADI) schemes have been proposed for the numerical simulation of classical differential equations. ADI schemes reduce a multidimensional problem to a series of independent one-dimensional problems and are thus computationally efficient. In this paper, we consider the numerical solution of an ST-FBTE on a finite domain. The time and space derivatives in the ST-FBTE are replaced by the Caputo and the sequential Riesz fractional derivatives, respectively. A fractional alternating direction implicit scheme (FADIS) for the ST-FBTE in 3-D is proposed. Stability and convergence properties of the FADIS are discussed. Finally, some numerical results for the ST-FBTE are given.
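For readers unfamiliar with the ADI idea invoked above, the sketch below shows one classical (integer-order) Peaceman-Rachford ADI step for 2-D diffusion, in which each half step solves a set of independent one-dimensional tridiagonal systems. It illustrates only the dimensional splitting; the Caputo and Riesz fractional discretisations of the FADIS itself are not reproduced, and the boundary treatment is an assumption.

    import numpy as np
    from scipy.linalg import solve_banded

    def second_diff(v, axis):
        """Second difference with zero (Dirichlet) boundaries."""
        d = np.zeros_like(v)
        c = [slice(None)] * v.ndim
        lo = [slice(None)] * v.ndim
        hi = [slice(None)] * v.ndim
        c[axis], lo[axis], hi[axis] = slice(1, -1), slice(0, -2), slice(2, None)
        d[tuple(c)] = v[tuple(hi)] - 2 * v[tuple(c)] + v[tuple(lo)]
        return d

    def implicit_sweep(v, r):
        """Solve (I - r*D2) x = v along axis 0: independent tridiagonal systems."""
        n = v.shape[0]
        ab = np.zeros((3, n))
        ab[0, 1:] = -r          # super-diagonal
        ab[1, :] = 1 + 2 * r    # main diagonal
        ab[2, :-1] = -r         # sub-diagonal
        return solve_banded((1, 1), ab, v)

    def adi_step(u, dt, dx, dy, kappa):
        """One Peaceman-Rachford step for u_t = kappa * (u_xx + u_yy)."""
        rx, ry = kappa * dt / (2 * dx ** 2), kappa * dt / (2 * dy ** 2)
        u_half = implicit_sweep(u + ry * second_diff(u, axis=1), rx)                 # implicit in x
        return implicit_sweep((u_half + rx * second_diff(u_half, axis=0)).T, ry).T   # implicit in y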
Abstract:
Appearance-based localization is increasingly used for loop closure detection in metric SLAM systems. Since it relies only upon the appearance-based similarity between images from two locations, it can perform loop closure regardless of accumulated metric error. However, the computation time and memory requirements of current appearance-based methods scale linearly not only with the size of the environment but also with the operation time of the platform. These properties impose severe restrictions on long-term autonomy for mobile robots, as loop closure performance will inevitably degrade with increased operation time. We present a set of improvements to the appearance-based SLAM algorithm CAT-SLAM that constrain computation scaling and memory usage with minimal degradation in performance over time. The appearance-based comparison stage is accelerated by exploiting properties of the particle observation update, and nodes in the continuous trajectory map are removed according to minimal information loss criteria. We demonstrate constant-time and constant-space loop closure detection in a large urban environment, with recall performance exceeding FAB-MAP by a factor of 3 at 100% precision, and investigate the minimum computational and memory requirements for maintaining mapping performance.
Abstract:
Place matters to literacy because the meanings of our language and actions are always materially and socially placed in the world (Scollon & Scollon, 2003). We cannot interpret signs, whether an icon, symbol, gesture, word, or action, without taking into account their associations with other meanings and objects in places. This chapter maps an emergent strand of literacy research that foregrounds place and space as constitutive, rather than as a backdrop for the real action. Space and place are seen as relational and dynamic, not as fixed and unchanging. Space and place are socially produced and hence can be contested, re-imagined and re-made. In bringing space and place into the frame of literacy studies we see a subtle shift: a rebalancing of the semiotic with the materiality of lived, embodied, and situated experience. ...
Abstract:
A suboptimal resource allocation algorithm for an Orthogonal Frequency Division Multiplexing (OFDM) based cooperative scheme is proposed. The system consists of multiple relays. The subcarrier space is divided into blocks, and the relays participating in cooperation are allocated specific blocks to be used with a given user. To ensure unique subcarrier assignment, the system is constrained so that the same block cannot be used by more than one user. Users are given fair block assignments, while no restriction is placed on the maximum number of blocks a relay can employ. Forced cost-based decisions [1] are used for block allocation. Simulation results show that this scheme outperforms a non-cooperative scheme with sequential allocation in terms of power usage.
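A minimal sketch of the constrained assignment described above follows: blocks are handed out round-robin so that every user receives the same number, each block goes to at most one user, and each assignment picks the cheapest available relay. The cost tensor and the greedy rule are placeholders for illustration, not the forced-cost decision rule of [1].

    import numpy as np

    def allocate_blocks(cost, blocks_per_user):
        """cost[user, relay, block]: assumed power cost of serving `user` via `relay`
        on `block`. Each block serves at most one user; relays are unrestricted."""
        n_users, n_relays, n_blocks = cost.shape
        free_blocks = set(range(n_blocks))
        assignment = {u: [] for u in range(n_users)}
        for _ in range(blocks_per_user):
            for u in range(n_users):                      # round robin keeps allocation fair
                block = min(free_blocks, key=lambda b: cost[u, :, b].min())
                relay = int(cost[u, :, block].argmin())
                assignment[u].append((block, relay))
                free_blocks.remove(block)
        return assignment

    # Example: 3 users, 2 relays, 12 subcarrier blocks, 2 blocks per user
    rng = np.random.default_rng(0)
    print(allocate_blocks(rng.random((3, 2, 12)), blocks_per_user=2))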
Abstract:
We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra Gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min, acquiring a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of 2.2 μm³ proved sufficient to pinpoint reaction initiation and the organization of the drainage architecture in space and time. We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample over more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process. Novel parallelized computer codes allow the geometry of the porosity and the drainage architecture to be quantified from the very large tomographic datasets (2048³ voxels) in unprecedented detail. We determined the position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped, and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With ongoing dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway. Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then, the internal stresses are released and dehydration happens efficiently, producing new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.
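The square-root fit mentioned above amounts to a one-parameter least-squares fit of the front position against the square root of elapsed time; the numbers below are made-up placeholders, not the experimental measurements.

    import numpy as np

    # Hypothetical elapsed times (min) and reaction-front advance (micrometres)
    t = np.array([30, 65, 100, 135, 170, 205, 240, 275, 310], dtype=float)
    x = np.array([180, 270, 330, 385, 430, 470, 510, 545, 575], dtype=float)

    # Fit x(t) = a * sqrt(t), the form expected for a diffusion-controlled front
    a = np.sum(np.sqrt(t) * x) / np.sum(t)   # least-squares slope through the origin
    print(f"x(t) ≈ {a:.1f} * sqrt(t)  (t in min, x in µm)")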
Abstract:
A simulation-based training system for surgical wound debridement was developed; it comprises a multimedia introduction, a surgical simulator (tutorial component), and an assessment component. The simulator includes two PCs, a haptic device, and a mirrored display. Debridement is performed on a virtual leg model with a shallow laceration wound superimposed. Trainees are instructed to remove debris with forceps, scrub with a brush, and rinse with saline solution to maintain sterility. Research and development issues currently under investigation include tissue deformation models using mass-spring systems and finite element methods; tissue cutting using a high-resolution volumetric mesh and dynamic topology; and accurate collision detection, cutting, and soft-body haptic rendering for two devices within the same haptic space.
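As a generic illustration of the mass-spring deformation model mentioned above (not the simulator's actual implementation), the sketch below advances a set of point masses connected by springs with damped explicit integration; all constants are assumptions.

    import numpy as np

    def mass_spring_step(pos, vel, springs, rest_len, k=200.0, damping=0.98,
                         mass=0.01, dt=1e-3):
        """One explicit step for a mass-spring mesh.
        pos, vel: (n, 3) arrays; springs: (m, 2) index pairs; rest_len: (m,) lengths."""
        force = np.zeros_like(pos)
        i, j = springs[:, 0], springs[:, 1]
        d = pos[j] - pos[i]
        length = np.linalg.norm(d, axis=1, keepdims=True)
        f = k * (length - rest_len[:, None]) * d / np.maximum(length, 1e-9)  # Hooke's law
        np.add.at(force, i, f)
        np.add.at(force, j, -f)
        vel = damping * (vel + dt * force / mass)   # haptic loops typically run near 1 kHz
        return pos + dt * vel, vel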
Abstract:
The flexible design concept is a relatively new trend in airport terminal design which is believed to accommodate the ever-changing needs of a terminal. Current architectural design processes grow more complex every day with the introduction of new building technologies, and the concept of a flexible airport terminal would apparently make the design process even more complex. Previous studies have demonstrated that the ever-growing aviation industry requires airport terminals to be planned, designed and constructed in a way that allows flexibility in the design process. In order to adopt the philosophy of ‘design for flexibility’, architects need to address a wide range of differing needs. An appropriate integration of process models, prior to the airport terminal design process, is expected to uncover the relationships that exist between spatial layouts and their corresponding functions. The current paper seeks to develop a way of sharing space-adjacency information obtained from Business Process Models (BPM) to assist in defining flexible airport terminal layouts. Critical design parameters are briefly investigated at this stage of the research whilst reviewing the available design alternatives, and an evaluation framework is proposed. Information obtained from various design layouts should assist in identifying and defining flexible design matrices, allowing architects to interpret and apply them throughout the lifecycle of the terminal building.
Abstract:
In the era of the global knowledge economy, urban regions—seeking to increase their competitive edge, become destinations for talent and investment, and provide prosperity and quality of life to their inhabitants—have little chance of achieving their development goals without forming effective knowledge-based urban development strategies. This paper aims to shed light on the planning and development processes of the knowledge-based urban development phenomenon with respect to the construction of knowledge community precincts aimed at making space for knowledge generation and place for knowledge communities. Following a thorough review of the literature on knowledge-based urban development and strategic asset-based planning, the paper undertakes policy and best-practice analyses to learn from the planning and development processes of internationally renowned knowledge community precincts—from Copenhagen, Eindhoven and Singapore. In the light of the analysis findings, the paper scrutinises major Australian knowledge community precinct initiatives—from Sydney, Melbourne and Brisbane—to better understand the dynamics of national practice and benchmark it against the international best-practice cases. The paper concludes with a discussion of the study findings and recommendations for successfully establishing space and place for both the knowledge economy and society in Australian cities.
Abstract:
Distributed space-time coding (DSTC) exploits the concepts of cooperative diversity and space-time coding to offer a powerful, bandwidth-efficient solution with improved diversity. In this paper, we evaluate the performance of DSTC with the slotted amplify-and-forward (SAF) protocol. Relay nodes between the source and destination nodes are grouped into two relay clusters based on their respective locations, and these relay clusters cooperate to transmit the space-time coded signal to the destination node in different time frames. We further extend the proposed Slotted-DSTC to the Slotted DSTC with redundant code (Slotted-DSTC-R) protocol, in which the relay nodes in both relay clusters forward the same space-time coded signal to the destination node to achieve a higher diversity order.
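For context, the kind of space-time coded signal that the relay clusters forward can be illustrated with the textbook 2x1 Alamouti code below; this is a standard example, not the exact code construction used by the slotted protocol.

    import numpy as np

    def alamouti_encode(symbols):
        """Map symbol pairs (s1, s2) onto two antennas over two time slots:
        slot 1: [s1, s2]; slot 2: [-conj(s2), conj(s1)]."""
        s = np.asarray(symbols).reshape(-1, 2)
        slot1 = s
        slot2 = np.column_stack((-np.conj(s[:, 1]), np.conj(s[:, 0])))
        return np.stack((slot1, slot2), axis=1)   # shape: (pairs, 2 slots, 2 antennas)

    # QPSK example
    qpsk = (np.array([1, -1, -1, 1]) + 1j * np.array([1, 1, -1, -1])) / np.sqrt(2)
    print(alamouti_encode(qpsk))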
Abstract:
Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, for which a greater quantity of data is generally available. Consequently, available water quality data sets span only relatively short time scales, unlike water quantity data. Therefore, the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes, as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. The assessment of model uncertainty is therefore an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches, namely ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression, for the estimation of uncertainty associated with pollutant build-up prediction using limited data sets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the use of the Bayesian approach along with the Monte Carlo simulation technique provides a powerful tool, which makes the best use of the available knowledge in the prediction and thereby presents a practical solution to counteract the limitations otherwise imposed on water quality modelling.
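A compact sketch of the regression ladder described above is given below: ordinary least squares, weighted least squares, and a conjugate Bayesian linear regression whose posterior is explored by Monte Carlo sampling. The data, weights and prior/noise variances are placeholders, not the study's models or observations.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.column_stack((np.ones(20), rng.uniform(1, 10, 20)))   # e.g. antecedent dry days
    y = X @ np.array([2.0, 0.8]) + rng.normal(0, 0.5, 20)        # hypothetical build-up load

    # Ordinary least squares
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

    # Weighted least squares (weights would reflect measurement reliability)
    w = rng.uniform(0.5, 1.5, 20)
    beta_wls = np.linalg.solve(X.T @ np.diag(w) @ X, X.T @ np.diag(w) @ y)

    # Bayesian linear regression with a Gaussian prior; posterior sampled by Monte Carlo
    tau2, sigma2 = 10.0, 0.25                                    # assumed prior / noise variances
    post_cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
    post_mean = post_cov @ (X.T @ y) / sigma2
    samples = rng.multivariate_normal(post_mean, post_cov, size=5000)
    print(beta_ols, beta_wls, samples.mean(axis=0), samples.std(axis=0))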
Abstract:
Internet services are an important part of daily activities for most of us. These services come with sophisticated authentication requirements which may not be handled well by average Internet users. The management of secure passwords, for example, creates extra overhead which is often neglected for usability reasons. Furthermore, password-based approaches are applicable only for initial logins and do not protect against unlocked-workstation attacks. In this paper, we provide a non-intrusive identity verification scheme based on behavioral biometrics, where keystroke dynamics based on free text are used continuously to verify the identity of a user in real time. We improve existing keystroke-dynamics-based verification schemes in four aspects. First, we improve scalability by using a constant number of users, instead of the whole user space, to verify the identity of the target user. Second, we provide an adaptive user model which enables our solution to take changes in user behavior into consideration in the verification decision. Third, we identify a new distance measure which enables us to verify the identity of a user with shorter text. Fourth, we decrease the number of false results. Our solution is evaluated on a data set which we collected from users while they were interacting with their mailboxes during their daily activities.
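A minimal sketch of free-text keystroke verification follows: digraph (key-pair) flight times from the current session are compared against the enrolled user's profile with a simple relative distance and thresholded. The distance measure and threshold are illustrative assumptions, not the measure proposed in the paper.

    import numpy as np
    from collections import defaultdict

    def digraph_profile(key_events):
        """Mean flight time per key pair from a list of (key, press_time) events."""
        prof = defaultdict(list)
        for (k1, t1), (k2, t2) in zip(key_events, key_events[1:]):
            prof[(k1, k2)].append(t2 - t1)
        return {pair: float(np.mean(times)) for pair, times in prof.items()}

    def profile_distance(enrolled, session):
        """Mean relative timing difference over shared digraphs (assumed measure)."""
        shared = enrolled.keys() & session.keys()
        if not shared:
            return float("inf")
        return float(np.mean([abs(enrolled[p] - session[p]) / enrolled[p] for p in shared]))

    def verify(enrolled, session, threshold=0.35):
        """Accept the claimed identity if the session stays close to the enrolled profile."""
        return profile_distance(enrolled, session) < threshold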
Abstract:
Supervision in the creative arts is a topic of growing significance given the increase in creative practice PhDs across universities in Australasia. This presentation will provide context for existing discussions in creative practice and supervision. Creative practice, encompassing practice-based or practice-led research, now has a rich history of research surrounding it. Although it is a comparatively new area of knowledge, great advances have been made in terms of how practice can influence, generate, and become research. The practice of supervision is also a topic of interest, perhaps unsurprisingly considering its necessity within the university environment. Scholars have written much about supervision practices and the importance of the supervisory role, in both academic and more informal forms. However, there is an obvious space in between: there is very little research on supervision practices within creative practice higher degrees, especially at PhD or doctorate level. Despite the existence of creative practice PhD programs, and thus the inherent need for successful supervisors, publications and resources remain minimal. Creative Intersections explores the existing publications and resources, and illustrates that a space for new published knowledge and tools exists.
Abstract:
This paper investigates advanced channel compensation techniques for the purpose of improving i-vector speaker verification performance in the presence of high intersession variability, using the NIST 2008 and 2010 SRE corpora. The performance of four channel compensation techniques is investigated: (a) weighted maximum margin criterion (WMMC), (b) source-normalized WMMC (SN-WMMC), (c) weighted linear discriminant analysis (WLDA), and (d) source-normalized WLDA (SN-WLDA). By extracting the discriminatory information between pairs of speakers as well as capturing the source variation information in the development i-vector space, the SN-WLDA based cosine similarity scoring (CSS) i-vector system provides over 20% improvement in EER for NIST 2008 interview and microphone verification and over 10% improvement in EER for NIST 2008 telephone verification, when compared to the SN-LDA based CSS i-vector system. Further, score-level fusion techniques are analysed to combine the best channel compensation approaches, providing over 8% improvement in DCF over the best single approach (SN-WLDA) for the NIST 2008 interview/telephone enrolment-verification condition. Finally, we demonstrate that the improvements found in the context of CSS also generalize to state-of-the-art GPLDA, with up to 14% relative improvement in EER for NIST SRE 2010 interview and microphone verification and over 7% relative improvement in EER for NIST SRE 2010 telephone verification.
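The cosine similarity scoring (CSS) step referred to above is simple enough to show directly: channel-compensated i-vectors are compared by the cosine of the angle between them and the score is thresholded. The projection matrix, dimensions and threshold below are assumptions.

    import numpy as np

    def css_score(w_enrol, w_test, projection=None):
        """Cosine similarity between two i-vectors, optionally after a channel-
        compensating projection such as an SN-WLDA transform estimated on development data."""
        if projection is not None:
            w_enrol, w_test = projection.T @ w_enrol, projection.T @ w_test
        return float(w_enrol @ w_test /
                     (np.linalg.norm(w_enrol) * np.linalg.norm(w_test)))

    # Toy usage with random 400-dimensional i-vectors and an assumed decision threshold
    rng = np.random.default_rng(2)
    enrol, test = rng.normal(size=400), rng.normal(size=400)
    accept = css_score(enrol, test) > 0.1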
Abstract:
Collections of solid particles from the Earth's stratosphere have been a significant part of atmospheric research programs since 1965 [1], but it has only been in the past decade that space-related disciplines have provided the impetus for a continued interest in these collections. Early research on specific particle types collected from the stratosphere established that interplanetary dust particles (IDPs) can be collected efficiently and in reasonable abundance using flat-plate collectors [2-4]. The tenacity of Brownlee and co-workers in this subfield of cosmochemistry has led to the establishment of a successful IDP collection and analysis program (using flat-plate collectors on high-flying aircraft) based on samples available for distribution from Johnson Space Center [5]. Other stratospheric collections are made, but the program at JSC offers a unique opportunity to study well-documented individual particles (or groups of particles) from a wide variety of sources [6]. The nature of the collection and curation process, as well as the timeliness of some sampling periods [7], ensures that all data obtained from stratospheric particles are a valuable resource for scientists from a wide range of disciplines. A few examples of the uses of these stratospheric dust collections are outlined below.