871 results for temporal compressive sensing ratio design
Abstract:
The field was the design of cross-cultural media art exhibition outcomes for the Japanese marketplace. The context was an improved understanding of spatial, temporal and contextual exhibition design procedures as they ultimately impact upon the augmentation of cross-cultural understanding. The research investigated cross-cultural new media exhibition practices suited to the specific sensitivities of Japanese exhibition practice. The methodology was principally practice-led. The research drew upon seven years of prior exhibition design practice in order to generate new Japanese exhibition design methodologies. It also empowered Zaim Artspace's Japanese curators to later present a range of substantial new media shows. The project also succeeded in developing new cross-cultural alliances that led to significant IDA projects in Beijing, Australia and Europe in the years 2008-10. Through invitations from external curators, new versions of the exhibition work subsequently travelled to four other major venues, including the prestigious Songzhang Art Museum, Beijing, in 07/08, the Block, QUT, Brisbane, and the Tokyo International Film Festival. Inspiration Art Press printed a major catalogue for the event, extensively featuring this exhibition. This project also led to the Sudamalis (2007) paper, ‘Building Capacity: Literacy And Creative Workforce Development Through International Digital Arts Projects (IDAprojects) Exhibition Programs And Partnerships’.
Abstract:
Two decades after its inception, Latent Semantic Analysis (LSA) has become part and parcel of every modern introduction to Information Retrieval. For any tool that matures so quickly, it is important to check its lore and limitations, or else stagnation will set in. We focus here on the three main aspects of LSA that are well accepted, the gist of which can be summarized as follows: (1) that LSA recovers latent semantic factors underlying the document space, (2) that this can be accomplished through lossy compression of the document space by eliminating lexical noise, and (3) that the latter is best achieved by Singular Value Decomposition. For each aspect we performed experiments analogous to those reported in the LSA literature and compared the evidence brought to bear in each case. On the negative side, we show that the above claims about LSA are much more limited than commonly believed. Even a simple example shows that LSA does not recover the optimal semantic factors as intended in the pedagogical example used in many LSA publications. Additionally, and remarkably deviating from LSA lore, LSA does not scale up well: the larger the document space, the more unlikely that LSA recovers an optimal set of semantic factors. On the positive side, we describe new algorithms to replace LSA (and more recent alternatives such as pLSA, LDA, and kernel methods) by trading its ℓ2 space for an ℓ1 space, thereby guaranteeing an optimal set of semantic factors. These algorithms seem to salvage the spirit of LSA as we think it was initially conceived.
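The SVD-based compression that claim (3) refers to can be sketched in a few lines. The 5x4 term-document count matrix below is made up for illustration; it is not the pedagogical example from the LSA literature.

```python
import numpy as np

# Hypothetical 5-term x 4-document count matrix (illustrative only).
A = np.array([
    [2., 1., 0., 0.],
    [1., 2., 0., 0.],
    [0., 1., 1., 0.],
    [0., 0., 2., 1.],
    [0., 0., 1., 2.],
])

# Claim (3): lossy compression of the document space via truncated SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                     # number of latent "semantic factors"
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Documents are then compared in the k-dimensional latent space.
doc_latent = np.diag(s[:k]) @ Vt[:k, :]
```

By the Eckart-Young theorem the rank-k reconstruction A_k is optimal in the ℓ2 (Frobenius) sense, which is precisely what the paper contests: ℓ2-optimal compression need not yield semantically optimal factors.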
Abstract:
This article explores an important temporal aspect of the design of strategic alliances by focusing on the issue of time bounds specification. Time bounds specification refers to a choice on behalf of prospective alliance partners at the time of alliance formation to either pre-specify the duration of an alliance to a specific time window, or to keep the alliance open-ended (Reuer & Ariño, 2007). For instance, Das (2006) mentions the example of the alliance between Telemundo Network and Mexican Argos Comunicacion (MAC). Announced in October 2000, this alliance entailed a joint production of 1200 hours of comedy, news, drama, reality and novella programs (Das, 2006). Conditioned on the projected date of completing the 1200 hours of programs, Telemundo Network and MAC pre-specified the time bounds of the alliance ex ante. Such time-bound alliances are said to be particularly prevalent in project-based industries, like movie production, construction, telecommunications and pharmaceuticals (Schwab & Miner, 2008). In many other instances, however, firms may choose to keep their alliances open-ended, not specifying a specific time bound at the time of alliance formation. The choice between designing open-ended alliances that are “built to last”, versus time-bound alliances that are “meant to end”, is important. Seminal works like Axelrod (1984), Heide & Miner (1992), and Parkhe (1993) demonstrated that the choice to place temporal bounds on a collaborative venture has important implications. More specifically, collaborations that have explicit, short-term time bounds (i.e. what is termed a shorter “shadow of the future”) are more likely to experience opportunism (Axelrod, 1984), are more likely to focus on the immediate present (Bakker, Boros, Kenis & Oerlemans, 2012), and are less likely to develop trust (Parkhe, 1993) than alliances for which time bounds are kept indeterminate.
These factors, in turn, have been shown to have important implications for the performance of alliances (e.g. Kale, Singh & Perlmutter, 2000). Thus, there seems to be a strong incentive for organizations to form open-ended strategic alliances. And yet Reuer & Ariño (2007), one of the few empirical studies that details the prevalence of time-bound and open-ended strategic alliances, found that about half (47%) of the alliances in their sample were time-bound, while the other half were open-ended. What conditions, then, determine this choice?
The backfilled GEI: a cross-capture modality gait feature for frontal and side-view gait recognition
Abstract:
In this paper, we propose a novel direction for gait recognition research: a new capture-modality-independent, appearance-based feature which we call the Back-filled Gait Energy Image (BGEI). It can be constructed from frontal depth images as well as from the more commonly used side-view silhouettes, allowing the feature to be applied across these two differing capture systems using the same enrolled database. To evaluate this new feature, a frontally captured depth-based gait dataset was created containing 37 unique subjects, a subset of which also contained sequences captured from the side. The results demonstrate that the BGEI can effectively be used to identify subjects through their gait across these two differing input devices, achieving a rank-1 match rate of 100% in our experiments. We also compare the BGEI against the GEI and GEV in their respective domains, using the CASIA dataset and our depth dataset, and show that it compares favourably against them. The experiments were performed using a sparse-representation-based classifier with a locally discriminating input feature space, which shows significant improvement in performance over other classifiers used in the gait recognition literature, achieving state-of-the-art results with the GEI on the CASIA dataset.
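For context, the standard GEI that the BGEI extends is simply the per-pixel average of aligned binary silhouettes over a gait cycle. The sketch below uses random binary frames in place of real segmented silhouettes, so the frame stack and its dimensions are purely illustrative.

```python
import numpy as np

# Hypothetical stack of aligned binary silhouettes (frames x height x width);
# in practice these come from segmented, size-normalised gait sequences.
rng = np.random.default_rng(0)
silhouettes = (rng.random((30, 64, 44)) > 0.5).astype(float)

# The Gait Energy Image is the per-pixel mean over one gait cycle.
gei = silhouettes.mean(axis=0)
```

Each GEI pixel thus lies in [0, 1] and encodes how often that pixel is part of the moving body, capturing both shape and motion in a single image.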
Abstract:
Wetlands are among the most productive and biologically diverse, yet very fragile, ecosystems. They are vulnerable to even small changes in their biotic and abiotic factors. In recent years, there has been concern over the continuous degradation of wetlands due to unplanned developmental activities. This necessitates inventorying, mapping, and monitoring of wetlands to implement sustainable management approaches. The principal objective of this work is to evolve a strategy to identify and monitor wetlands using temporal remote sensing (RS) data. Pattern classifiers were used to extract wetlands automatically from the NIR bands of MODIS, Landsat MSS and Landsat TM remote sensing data. MODIS provided data from 2002 to 2007, while for 1973 and 1992 the IR bands of Landsat MSS and TM (79 m and 30 m spatial resolution) were used. Principal components of the IR bands of MODIS (250 m) were fused with IRS LISS-3 NIR (23.5 m). To extract wetlands, statistical unsupervised learning of the IR bands for the respective temporal data was performed using a Bayesian approach based on prior probability, mean and covariance. Temporal analysis of the wetlands indicates a sharp decline of 58% in Greater Bangalore, attributable to intense urbanization processes, evident from a 466% increase in built-up area from 1973 to 2007.
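The Bayesian step the abstract describes, scoring each pixel against per-class prior, mean and covariance, can be sketched as a Gaussian class-conditional classifier. All numbers below (priors, band means, covariances, the sample pixel) are invented for illustration and are not taken from the study.

```python
import numpy as np

# Log-posterior of a pixel x under a Gaussian class model with the three
# quantities the abstract names: prior probability, mean and covariance.
def gaussian_log_posterior(x, prior, mean, cov):
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return np.log(prior) - 0.5 * (logdet + d @ np.linalg.inv(cov) @ d)

# Illustrative two-class model over two IR bands: wetland vs non-wetland.
priors = [0.3, 0.7]
means = [np.array([0.2, 0.1]), np.array([0.6, 0.5])]
covs = [0.01 * np.eye(2), 0.02 * np.eye(2)]

pixel = np.array([0.25, 0.12])            # reflectances in the two bands
scores = [gaussian_log_posterior(pixel, p, m, c)
          for p, m, c in zip(priors, means, covs)]
label = int(np.argmax(scores))            # 0 = wetland, 1 = non-wetland
```

In the unsupervised setting described, the class means and covariances would themselves be estimated from the imagery (e.g. by clustering) rather than supplied, as here, by hand.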
Abstract:
Urbanisation is the increase in the population of cities in proportion to the region's rural population. Urbanisation in India is very rapid, with the urban population growing at around 2.3 percent per annum. Urban sprawl refers to dispersed development along highways, around the city, and in the rural countryside, with implications such as loss of agricultural land, open space and ecologically sensitive habitats. Sprawl is thus a pattern and pace of land use in which the rate of land consumed for urban purposes exceeds the rate of population growth, resulting in an inefficient and consumptive use of land and its associated resources. This unprecedented urbanisation trend due to a burgeoning population has posed serious challenges to decision makers in the city planning and management process, involving a plethora of issues such as infrastructure development, traffic congestion, and basic amenities (electricity, water, and sanitation). In this context, to aid decision makers in following holistic approaches to city and urban planning, the analysis and visualisation of urban growth patterns and their impact on natural resources has gained importance. This communication analyses urbanisation patterns and trends using temporal remote sensing data, based on supervised learning using maximum likelihood estimation of multivariate normal density parameters and a Bayesian classification approach. The technique is implemented for Greater Bangalore, one of the fastest growing cities in the world, with Landsat data of 1973, 1992 and 2000, IRS LISS-3 data of 1999 and 2006, and MODIS data of 2002 and 2007. The study shows that there has been a growth of 466% in urban areas of Greater Bangalore across 35 years (1973 to 2007). The study unravels the pattern of growth in Greater Bangalore and its implications for the local climate and natural resources, necessitating appropriate strategies for sustainable management.
Abstract:
Urbanisation is a dynamic, complex phenomenon involving large-scale changes in land use at local levels. Analyses of changes in land use in urban environments provide a historical perspective and an opportunity to assess the spatial patterns, correlations, trends, rate and impacts of the change, which would help in better regional planning and good governance of the region. The main objective of this research is to quantify urban dynamics using temporal remote sensing data with the help of well-established landscape metrics. Bangalore, one of the most rapidly urbanising landscapes in India, was chosen for this investigation. The complex process of urban sprawl was modelled using spatio-temporal analysis. Land use analyses show 584% growth in built-up area during the last four decades, with a decline of vegetation by 66% and of water bodies by 74%. Analyses of the temporal data reveal an increase in urban built-up area of 342.83% (during 1973-1992), 129.56% (during 1992-1999), 106.7% (1999-2002), 114.51% (2002-2006) and 126.19% from 2006 to 2010. The study area was divided into four zones, and each zone was further divided into 17 concentric circles of incrementally increasing 1 km radius to understand the patterns and extent of urbanisation at local levels. The urban density gradient illustrates a radial pattern of urbanisation for the period 1973-2010. Bangalore grew radially from 1973 to 2010, indicating that urbanisation is intensifying from the central core and has reached the periphery of Greater Bangalore. Shannon's entropy and alpha and beta population densities were computed to understand the level of urbanisation at local levels. Shannon's entropy values of recent times confirm dispersed, haphazard urban growth in the city, particularly on the outskirts. This also illustrates the extent of influence of the drivers of urbanisation in various directions. Landscape metrics provided in-depth knowledge about the sprawl.
Principal component analysis helped in prioritising the metrics for detailed analyses. The results clearly indicate that the whole landscape is aggregating into a large patch in 2010, as compared to earlier years, which were dominated by several small patches. The large-scale conversion of small patches into a single large patch can be seen from 2006 to 2010. In 2010 the patches are maximally aggregated, indicating that the city has become more compact and more urbanised in recent years. Bangalore was the most sought-after destination for its climatic conditions and the availability of various facilities (land availability, economy, political factors) compared to other cities. The growth into a single urban patch can be attributed to rapid urbanisation coupled with industrialisation. Monitoring growth through landscape metrics helps to maintain and manage natural resources. (C) 2012 Elsevier B.V. All rights reserved.
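Shannon's entropy, used above as a sprawl indicator, is computed over the proportions of built-up area across zones: values near log(n) indicate dispersed growth, values near zero a compact core. The zone proportions below are made up to illustrate the two regimes.

```python
import numpy as np

# Shannon's entropy of built-up proportions across n zones.
# H = -sum(p_i * log(p_i)); H -> log(n) for dispersed (sprawling) growth.
def shannon_entropy(builtup):
    p = np.asarray(builtup, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                          # convention: 0 * log 0 = 0
    return -np.sum(p * np.log(p))

compact = [0.9, 0.05, 0.03, 0.02]         # growth concentrated in the core
dispersed = [0.26, 0.25, 0.25, 0.24]      # growth spread across zones

h_max = np.log(4)                         # upper bound for 4 zones
```

Comparing shannon_entropy(dispersed) against h_max, and against shannon_entropy(compact), reproduces the diagnostic used in the study: entropy approaching its maximum signals haphazard, dispersed urban growth.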
Abstract:
This paper considers the problem of identifying the footprints of communication of multiple transmitters in a given geographical area. To do this, a number of sensors are deployed at arbitrary but known locations in the area, and their individual decisions regarding the presence or absence of the transmitters' signal are combined at a fusion center to reconstruct the spatial spectral usage map. One straightforward scheme to construct this map is to query each of the sensors and cluster the sensors that detect the primary's signal. However, exploiting the fact that a typical transmitter footprint map is a sparse image, two novel compressive sensing based schemes are proposed, which require significantly fewer transmissions than the querying scheme. A key feature of the proposed schemes is that the measurement matrix is constructed from a pseudo-random binary phase shift applied to the decision of each sensor prior to transmission. The measurement matrix is thus a binary ensemble which satisfies the restricted isometry property. The number of measurements needed for accurate footprint reconstruction is determined using compressive sampling theory. The three schemes are compared through simulations in terms of a performance measure that quantifies the accuracy of the reconstructed spatial spectral usage map. It is found that the proposed sparse-reconstruction-based schemes significantly outperform the round-robin querying scheme.
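The measurement model above can be sketched in a toy simulation: each sensor's 0/1 decision is multiplied by a pseudo-random +/-1 phase, so the fusion center effectively observes y = Phi x with a binary (Bernoulli) matrix Phi of the kind known to satisfy the RIP. The problem sizes are arbitrary, and orthogonal matching pursuit is used here only as a stand-in sparse recovery algorithm; the paper's own reconstruction schemes may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 100, 40, 5                       # sensors, measurements, occupied cells

# Sparse 0/1 footprint map: K of N sensors detect the transmitters' signal.
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = 1.0

# Pseudo-random +/-1 phases applied to each sensor's decision.
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)
y = Phi @ x                                # what the fusion center receives

# Stand-in sparse recovery: orthogonal matching pursuit.
def omp(Phi, y, K):
    r, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, K)
```

Note that M = 40 measurements suffice here, versus N = 100 transmissions for the round-robin querying scheme, which is the saving the abstract describes.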
Abstract:
The key problem tackled in this paper is the development of a stand-alone, self-powered sensor to directly sense the spectrum of mechanical vibrations. Such a sensor could be deployed in wide-area sensor networks to monitor structural vibrations of large machines (e.g. aircraft) and initiate corrective action if the structure approaches resonance. In this paper, we study the feasibility of using stretched membranes of the polymer piezoelectric polyvinylidene fluoride for low-frequency vibration spectrum sensing. We design and evaluate a low-frequency vibration spectrum sensor that accepts an incoming vibration and directly provides the spectrum of the vibration as its output.
Abstract:
In this paper, we explore fundamental limits on the number of tests required to identify a given number of "healthy" items from a large population containing a small number of "defective" items, in a nonadaptive group testing framework. Specifically, we derive mutual information-based upper bounds on the number of tests required to identify the required number of healthy items. Our results show that an impressive reduction in the number of tests is achievable compared to the conventional approach of using classical group testing to first identify the defective items and then pick the required number of healthy items from the complement set. For example, to identify L healthy items out of a population of N items containing K defective items, when the tests are reliable, our results show that O(K(L - 1)/(N - K)) measurements are sufficient. In contrast, the conventional approach requires O(K log(N/K)) measurements. We derive our results in a general sparse signal setup, and hence they are applicable to other sparse-signal-based applications, such as compressive sensing, as well.
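A back-of-the-envelope comparison makes the gap between the two scalings concrete. The population sizes below are arbitrary, and the big-O constants are dropped, so this illustrates only the orders of growth, not exact test counts.

```python
from math import log

# Illustrative sizes: N items, K defective, L healthy items to identify.
N, K, L = 10_000, 20, 50

# Direct identification of L healthy items (this paper's bound, constants dropped).
direct = K * (L - 1) / (N - K)

# Conventional route: classical group testing for the K defectives first.
classical = K * log(N / K)
```

For these sizes the direct bound is a small fraction of a test while the classical route needs on the order of a hundred, reflecting that finding a few healthy items is far easier than pinning down every defective.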
Abstract:
A joint analysis-synthesis framework is developed for the compressive sensing (CS) recovery of speech signals. The signal is assumed to be sparse in the residual domain, with the linear prediction filter used as the sparsifying transformation. Importantly, this transform is not known a priori, since estimating the predictor filter requires knowledge of the signal. Two prediction filters, a comb filter for pitch and an all-pole formant filter, are needed to induce maximum sparsity. An iterative method is proposed for the estimation of both the prediction filters and the signal itself. The formant prediction filter is used as the synthesis transform, while the pitch filter is used to model the periodicity in the residual excitation signal in the analysis mode. Significant improvement in the LLR measure is seen over the previously reported formant filter estimation.
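The analysis step, estimating an all-pole formant predictor and forming the (sparser) prediction residual, can be sketched with the autocorrelation method. The synthetic AR(2) "speech" frame, its order, and its coefficients below are invented for illustration; the pitch comb filter and the paper's iterative joint estimation are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic AR(2) frame standing in for voiced speech (illustrative only).
a_true = np.array([1.3, -0.4])
x = np.zeros(400)
e = rng.normal(size=400)
for n in range(2, 400):
    x[n] = a_true @ x[n-2:n][::-1] + 0.1 * e[n]

# Autocorrelation method: solve the normal equations R a = r for order p.
p = 2
r = np.array([x[:400-k] @ x[k:] for k in range(p + 1)])
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a = np.linalg.solve(R, r[1:])             # estimated predictor coefficients

# Prediction residual: the domain in which the signal is modelled as sparse.
residual = x[p:] - np.array([a @ x[n-p:n][::-1] for n in range(p, 400)])
```

The residual carries far less energy than the signal itself, which is what makes the residual domain a useful sparsifying transform for CS recovery.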
Abstract:
Compressive Sensing theory combines signal sampling and compression for sparse signals, resulting in a reduction in the sampling rate and in the computational complexity of the measurement system. In recent years, many recovery algorithms have been proposed to reconstruct the signal efficiently. Look-Ahead OMP (LAOMP) is a recently proposed method which uses a look-ahead strategy and performs significantly better than other greedy methods. In this paper, we propose a modification to the LAOMP algorithm that chooses the look-ahead parameter L adaptively, thus reducing the complexity of the algorithm without compromising on performance. The performance of the algorithm is evaluated through Monte Carlo simulations.
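The look-ahead idea can be conveyed with a simplified sketch: instead of committing to the single best-correlated atom, evaluate the top-L candidates by the residual each would leave and keep the best. LAOMP as published looks further ahead than this one-step version, and the adaptive choice of L proposed in the paper is not implemented here; L is fixed and the test problem is invented.

```python
import numpy as np

# Simplified one-step look-ahead OMP (illustrative; not the published LAOMP).
def la_omp(A, y, K, L=3):
    support, r = [], y.copy()
    for _ in range(K):
        corr = np.abs(A.T @ r)
        corr[support] = -1                       # exclude already-chosen atoms
        candidates = np.argsort(corr)[-L:]       # top-L correlated atoms
        best, best_res = None, np.inf
        for j in candidates:
            trial = support + [int(j)]
            c, *_ = np.linalg.lstsq(A[:, trial], y, rcond=None)
            res = np.linalg.norm(y - A[:, trial] @ c)
            if res < best_res:                   # keep the candidate that
                best, best_res = int(j), res     # shrinks the residual most
        support.append(best)
        c, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ c
    x = np.zeros(A.shape[1])
    x[support] = c
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 80)) / np.sqrt(30)
x0 = np.zeros(80)
x0[[5, 17, 42]] = [1.0, -2.0, 1.5]
x_hat = la_omp(A, A @ x0, K=3)
```

Each look-ahead step costs L least-squares solves instead of one, which is exactly the overhead an adaptive L aims to reduce.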
Abstract:
Compressive Sensing (CS) theory combines signal sampling and compression for sparse signals, resulting in a reduction in the sampling rate. In recent years, many recovery algorithms have been proposed to reconstruct the signal efficiently. Subspace Pursuit and Compressive Sampling Matching Pursuit are among the popular greedy methods. Fusion of Algorithms for Compressed Sensing is a recently proposed method in which several CS reconstruction algorithms participate and the final estimate of the underlying sparse signal is determined by fusing the estimates obtained from the participating algorithms. All these methods involve solving a least squares problem which may be ill-conditioned, especially in the low-dimension measurement regime. In this paper, we propose a step prior to least squares that ensures the well-conditioning of the least squares problem. Using Monte Carlo simulations, we show that in the low-dimension measurement scenario this modification improves the reconstruction capability of the algorithm for both clean and noisy measurements.
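The abstract does not spell out the paper's specific pre-processing step, so the sketch below only illustrates the underlying problem, an ill-conditioned least-squares subproblem in the low-measurement regime, and one generic stand-in remedy, Tikhonov (ridge) regularisation. The matrix, the near-dependence, and the regularisation weight are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# An 8x7 system whose last two columns are nearly linearly dependent,
# mimicking the ill-conditioned subproblems that arise with few measurements.
A = rng.normal(size=(8, 7))
A[:, 6] = A[:, 5] + 1e-8 * rng.normal(size=8)
y = rng.normal(size=8)

cond = np.linalg.cond(A)                   # very large: plain lstsq is fragile

# Stand-in conditioning step: Tikhonov (ridge) regularisation.
lam = 1e-6
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(7), A.T @ y)
```

Adding lam to the diagonal bounds the condition number of the normal equations, trading a small bias for numerical stability, which is the spirit (though not necessarily the mechanism) of the conditioning step the paper proposes.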
Abstract:
We propose data acquisition from continuous-time signals belonging to the class of real-valued trigonometric polynomials using an event-triggered sampling paradigm. The sampling schemes proposed are: level crossing (LC), close-to-extrema LC, and extrema sampling. An analysis of the robustness of these schemes to jitter and to bandpass additive Gaussian noise is presented. In general, these sampling schemes result in non-uniformly spaced sample instants. We address the issue of signal reconstruction from the acquired data set by imposing a sparsity structure on the signal model, to circumvent the problem of gap and density constraints. The recovery performance is contrasted amongst the various schemes and with a random sampling scheme. In the proposed approach, both sampling and reconstruction are non-linear operations, yet in contrast to the random sampling methodologies proposed in compressive sensing, these techniques may be implemented in practice with low-power circuitry.
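The level-crossing (LC) scheme can be sketched as follows: record a (time, level) pair whenever the signal crosses one of a fixed set of levels. The trigonometric polynomial and the level set below are arbitrary, the crossing instants are found by linear interpolation on a dense grid, and the sparsity-based reconstruction step is omitted.

```python
import numpy as np

# Level-crossing acquisition: emit (time, level) events at each crossing.
def level_crossings(t, x, levels):
    events = []
    for lv in levels:
        s = np.sign(x - lv)
        idx = np.where(np.diff(s) != 0)[0]            # sign changes = crossings
        for i in idx:
            # linear interpolation for the crossing instant
            tc = t[i] + (lv - x[i]) * (t[i+1] - t[i]) / (x[i+1] - x[i])
            events.append((float(tc), lv))
    return sorted(events)

# A real-valued trigonometric polynomial (illustrative choice of harmonics).
t = np.linspace(0, 1, 4000)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 5 * t)
samples = level_crossings(t, x, levels=[-0.5, 0.0, 0.5])
```

The resulting sample instants are non-uniformly spaced and signal-dependent, which is why reconstruction needs the sparsity structure rather than classical uniform-sampling theory.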
Abstract:
Quality assurance of compilers plays an important role in improving the quality of software products, and the testing of compiler optimizations is a core part of it. Testing compiler optimizations requires a large number of test-case programs. Constructing these test cases with traditional manual methods is inefficient, while grammar-based construction methods lack targeting. Automatically constructing test cases from a formal description of the optimization overcomes these shortcomings. This paper designs and implements a method for generating compiler-optimization test-case programs based on formal descriptions. The method constructs a key-vertex control-flow graph from a temporal-logic description of the compiler optimization, then transforms it step by step into a full control-flow graph and from that obtains the test-case program. Coverage experiments on GCC (version 4.1.1) show that the method can generate highly targeted test cases and achieve a considerable degree of coverage.