922 results for Voice Digital Processing
Abstract:
The along-track stereo images of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensor, with 15 m resolution, were used to generate a Digital Elevation Model (DEM) over an area of low elevation near Mean Sea Level (MSL) in Johor, Malaysia. The absolute DEM was generated using the Rational Polynomial Coefficient (RPC) model run in ENVI 4.8 software. To generate the absolute DEM, 60 Ground Control Points (GCPs) with vertical accuracy of less than 10 m were extracted from the topographic map of the study area. The assessment was carried out on the uncorrected and corrected DEMs using dozens of Independent Check Points (ICPs). The uncorrected DEM showed an RMSEz of ±26.43 m, which decreased to ±16.49 m for the corrected DEM after post-processing. Overall, the corrected DEM from the ASTER stereo images met expectations.
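For reference, the RMSEz values quoted above follow from the elevation differences at the check points. The sketch below is a minimal illustration of that computation; the ICP values in it are invented, not the study's data.

```python
# Minimal sketch (not the study's code): RMSEz of a DEM against Independent
# Check Points (ICPs). Each pair holds (reference elevation, DEM elevation) in metres.
import math

def rmse_z(check_points):
    """check_points: iterable of (reference_elevation_m, dem_elevation_m)."""
    errors = [dem - ref for ref, dem in check_points]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Made-up ICPs: a bias-corrected DEM should show a lower RMSEz.
icps_uncorrected = [(2.0, 28.1), (5.5, 31.9), (3.2, 29.7), (1.8, 27.4)]
icps_corrected   = [(2.0, 18.3), (5.5, 21.8), (3.2, 19.9), (1.8, 17.6)]
print(f"RMSEz (uncorrected): {rmse_z(icps_uncorrected):.2f} m")
print(f"RMSEz (corrected):   {rmse_z(icps_corrected):.2f} m")
```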
Abstract:
This study aims to assess the accuracy of a Digital Elevation Model (DEM) generated using Toutin's model, which was run in OrthoEngineSE of PCI Geomatics 10.3. The along-track stereo images of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensor, with 15 m resolution, were used to produce a DEM over an area of low elevation near Mean Sea Level (MSL) in Johor, Malaysia. Despite satisfactory pre-processing results, visual assessment of the DEM generated from Toutin's model showed that it contained many outliers and incorrect values. The failure of Toutin's model is most likely due to the inaccuracy and insufficiency of the ASTER ephemeris data for low terrain, as well as the large water body in the stereo images.
Abstract:
As airports continue to become more 'customer-centric', their digital customer-facing technologies are increasingly embedded within the passenger journey. This study takes a customer-centric view of airport digital technology by exploring the ways in which digital technologies are being applied within airports to improve passenger perceptions of service quality during their journey. The literature review develops a framework encompassing the themes of airport service quality (function, interaction and diversion) and digital strategy. This framework has been applied to six airports exhibiting high service quality. The findings so far suggest that improving customer function involves the use of automated and self-service technologies that provide passengers with greater efficiency and effectiveness at processing points. Additionally, technology to improve the experience during wait times may either offer aesthetic qualities or provide some form of productivity to passengers. Customer interaction, in turn, is influenced by digital technology through constant passenger engagement throughout the journey. As the research nears completion, the influence of these themes on the framework will become more apparent.
Abstract:
At CRYPTO 2006, Halevi and Krawczyk proposed two randomized hash function modes and analyzed the security of digital signature algorithms based on these constructions. They showed that the security of signature schemes based on the two randomized hash function modes relies on properties similar to the second preimage resistance rather than on the collision resistance property of the hash functions. One of the randomized hash function modes was named the RMX hash function mode and was recommended for practical purposes. The National Institute of Standards and Technology (NIST), USA, standardized a variant of the RMX hash function mode and published this standard in Special Publication (SP) 800-106. In this article, we first discuss a generic online birthday existential forgery attack of Dang and Perlner on the RMX-hash-then-sign schemes. We show that a variant of this attack can be applied to forge signatures under the other randomize-hash-then-sign schemes. We point out practical limitations of the generic forgery attack on the RMX-hash-then-sign schemes. We then show that these limitations can be overcome for the RMX-hash-then-sign schemes if it is easy to find fixed points for the underlying compression functions, as is the case for the Davies-Meyer construction used in popular hash functions such as MD5, designed by Rivest, and the SHA family, designed by the National Security Agency (NSA), USA, and published by NIST in the Federal Information Processing Standards (FIPS). We show an online birthday forgery attack on this class of signatures by using a variant of Dean's method of finding fixed-point expandable messages for hash functions based on the Davies-Meyer construction. This forgery attack is also applicable to signature schemes based on the variant of RMX standardized by NIST in SP 800-106. We discuss some important applications of our attacks and their applicability to signature schemes based on hash functions with 'built-in' randomization. Finally, we compare our attacks on randomize-hash-then-sign schemes with the generic forgery attacks on the standard hash-based message authentication code (HMAC).
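The fixed-point weakness referred to above can be illustrated with a toy construction. The sketch below uses a made-up 64-bit ARX permutation in place of a real compression function (it is not MD5's or SHA's) to show why Davies-Meyer, f(h, m) = E_m(h) XOR h, admits easily computable fixed points: choosing h = E_m^{-1}(0) gives f(h, m) = h.

```python
# Toy sketch (assumptions: a made-up 64-bit ARX "cipher", not a real hash's
# compression function): Dean's fixed-point trick for Davies-Meyer.
MASK = (1 << 64) - 1

def rotl(x, r):
    return ((x << r) | (x >> (64 - r))) & MASK

def rotr(x, r):
    return ((x >> r) | (x << (64 - r))) & MASK

def toy_encrypt(key, block):            # an invertible keyed permutation (toy only)
    x = block
    for i in range(8):
        x = (x + ((key >> (8 * i)) & 0xFF) + i) & MASK
        x = rotl(x, 13) ^ key
    return x

def toy_decrypt(key, block):            # exact inverse of toy_encrypt
    x = block
    for i in reversed(range(8)):
        x = rotr(x ^ key, 13)
        x = (x - ((key >> (8 * i)) & 0xFF) - i) & MASK
    return x

def davies_meyer(h, m):
    return toy_encrypt(m, h) ^ h        # Davies-Meyer compression: E_m(h) XOR h

m = 0x0123456789ABCDEF                  # arbitrary message block
h_fixed = toy_decrypt(m, 0)             # h such that E_m(h) = 0
assert davies_meyer(h_fixed, m) == h_fixed
print(f"fixed point for block {m:#x}: h = {h_fixed:#x}")
```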
Abstract:
Investigative journalists who join what theorist Manuel Castells describes as the 'network society' can locate potential news sources using various social media platforms and interview them using Web-based communication technologies. The potential for journalistic investigations involving multi-directional conversations with news sources across the globe is beginning to be explored. Potential news sources who are part of the network society have unprecedented access to specialist investigative reporters irrespective of their location and can speak to them more cost-effectively than in the past. This paper explores how new journalism technologies are allowing journalists to call powerful individuals and institutions to account, irrespective of national borders, and how previously silenced individuals are being given a voice. For an example of international investigative journalism facilitated by a combination of social media, Web-based communications, reporter collaboration and news outlet collaborations, see http://www.theaustralian.com.au/news/features/churchs-wall-of-silence-on-sexual-abuse/story-e6frg6z6-1226639077238.
Abstract:
Acoustic recordings of the environment provide an effective means to monitor bird species diversity. To facilitate exploration of acoustic recordings, we describe a content-based birdcall retrieval algorithm. A query birdcall is a region of spectrogram bounded by frequency and time. Retrieval depends on a similarity measure derived from the orientation and distribution of spectral ridges. The spectral ridge detection method caters for a broad range of birdcall structures. In this paper, we extend previous work by incorporating a spectrogram scaling step in order to improve the detection of spectral ridges. Compared to an existing approach based on MFCC features, our feature representation achieves better retrieval performance for multiple bird species in noisy recordings.
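As a rough illustration of content-based retrieval over spectrogram regions, the sketch below compares two time-frequency blocks using a crude gradient-orientation histogram and cosine similarity. It is a stand-in under stated assumptions, not the authors' ridge-detection, scaling or retrieval algorithm.

```python
# Minimal sketch (not the authors' algorithm): comparing two birdcall regions of a
# spectrogram with a crude orientation-histogram feature. Spectral ridge detection,
# spectrogram scaling and noise handling from the paper are not reproduced here.
import numpy as np
from scipy.signal import spectrogram

def call_region(audio, fs, t0, t1, f0, f1, nperseg=512):
    """Cut the spectrogram block bounded by time [t0, t1] s and frequency [f0, f1] Hz."""
    f, t, sxx = spectrogram(audio, fs=fs, nperseg=nperseg)
    return sxx[(f >= f0) & (f <= f1)][:, (t >= t0) & (t <= t1)]

def orientation_feature(region, bins=8):
    """Histogram of local gradient orientations, a rough proxy for ridge orientation."""
    gy, gx = np.gradient(np.log(region + 1e-12))
    angles = np.arctan2(gy, gx).ravel()
    weights = np.hypot(gx, gy).ravel()
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), weights=weights)
    return hist / (np.linalg.norm(hist) + 1e-12)

def similarity(query, candidate):
    return float(np.dot(orientation_feature(query), orientation_feature(candidate)))

# Usage with synthetic audio (a noisy chirp standing in for a birdcall):
fs = 22050
t = np.arange(0, 2.0, 1 / fs)
audio = np.sin(2 * np.pi * (2000 + 1500 * t) * t) + 0.1 * np.random.randn(t.size)
q = call_region(audio, fs, 0.2, 0.8, 1500, 6000)
c = call_region(audio, fs, 1.0, 1.6, 1500, 6000)
print("similarity:", similarity(q, c))
```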
Abstract:
The pervasive use of the World Wide Web by the general population has created a cultural shift in "our living world". It has enabled more people to share more information about more events and issues in the world than was possible before its general use. As a consequence, it has transformed traditional news media's approach to almost every aspect of journalism, with many organisations restructuring their philosophy and practice to include a variety of participatory spaces/forums where people are free to engage in deliberative dialogue about matters of public importance. Moreover, while news media were the traditional gatekeepers of information, today many organisations allow, to different degrees, the general public and other independent journalism entities to participate in the news production process, which may include agenda setting and content production. This paper draws from an international collective case study that showcases various approaches to networked online news journalism. It examines the ways in which different traditional news media models use digital tools and technologies for participatory communication of information about matters of public interest. The research finds differences between the ways in which public service, commercial and independent news media give voice to the public and ultimately their approach to journalism's role as the Fourth Estate, one of the key institutions of democracy. The work is framed by the notion that journalism in democratic societies has a key role in ensuring citizens are informed and engaged with public affairs. An examination of four media models, the British Broadcasting Corporation (BBC), the Guardian, News Limited and OhmyNews, showcases the various approaches to networked online news journalism and how each provides different avenues for citizen empowerment. The cases are described and analysed in the context of their own social, political and economic setting. Semi-structured in-depth interviews with key senior journalists and editors provide specific information on comparisons between the distinctive practices of their own organisations. In particular, these show how the ideal of democracy can be used as a tool of persuasion as much as a method of deliberation.
Abstract:
Color displays used in image processing systems consist of a refresh memory buffer storing digital image data which are converted into analog signals to display an image by driving the primary color channels (red, green, and blue) of a color television monitor. The color cathode ray tube (CRT) of the monitor is unable to reproduce colors exactly due to phosphor limitations, exponential luminance response of the tube to the applied signal, and limitations imposed by the digital-to-analog conversion. In this paper we describe some computer simulation studies (using the U*V*W* color space) carried out to measure these reproduction errors. Further, a procedure to correct for color reproduction error due to the exponential luminance response (gamma) of the picture tube is proposed, using a video-lookup-table and a higher resolution digital-to-analog converter. It is found, on the basis of computer simulation studies, that the proposed gamma correction scheme is effective and robust with respect to variations in the assumed value of the gamma.
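The correction described above can be pictured as a video lookup table that pre-distorts stored codes by the inverse of the tube's gamma, with the table feeding a higher-resolution digital-to-analog converter. The sketch below is a minimal version under assumed values (gamma 2.2, 8-bit codes, 10-bit DAC), not the paper's simulation.

```python
# Minimal sketch (assumptions, not the paper's exact procedure): a video lookup
# table that pre-compensates the CRT's power-law ("gamma") luminance response.
# 8-bit frame-buffer codes index a table of 10-bit DAC values; the higher DAC
# resolution limits quantization error at the dark end.
GAMMA = 2.2          # assumed CRT gamma
IN_LEVELS = 256      # 8-bit frame-buffer codes
OUT_LEVELS = 1024    # 10-bit digital-to-analog converter

def build_gamma_lut(gamma=GAMMA):
    lut = []
    for code in range(IN_LEVELS):
        v = code / (IN_LEVELS - 1)            # desired (linear) luminance fraction
        drive = v ** (1.0 / gamma)            # drive signal that yields v after CRT gamma
        lut.append(round(drive * (OUT_LEVELS - 1)))
    return lut

lut = build_gamma_lut()
# The CRT maps the drive signal back through its gamma, so the displayed
# luminance tracks the stored code approximately linearly:
code = 64
displayed = (lut[code] / (OUT_LEVELS - 1)) ** GAMMA
print(code / 255, "~", displayed)
```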
Abstract:
The paper presents a new adaptive delta modulator, called the hybrid constant factor incremental delta modulator (HCFIDM), which uses instantaneous as well as syllabic adaptation of the step size. Three instantaneous algorithms have been used: two new instantaneous algorithms (CFIDM-3 and CFIDM-2) and the third, Song's voice ADM (SVADM). The quantisers have been simulated on a digital computer and their performances studied. The figure of merit used is the SNR with correlated, RC-shaped Gaussian signals and real speech as the input. The results indicate that the hybrid technique is superior to the non-hybrid adaptive quantisers. Also, the two new instantaneous algorithms developed have improved SNR and fast response to step inputs as compared to the earlier systems.
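For orientation, constant-factor instantaneous adaptation can be sketched as follows. This is a generic adaptive delta modulator with assumed parameters, not the HCFIDM or the CFIDM-2/CFIDM-3 algorithms of the paper.

```python
# Minimal sketch (a generic constant-factor adaptive delta modulator): the step size
# grows by a constant factor K while consecutive output bits agree (slope overload)
# and shrinks by the same factor when they alternate (granular noise).
import numpy as np

def cf_adm_encode(x, step_min=0.01, step_max=1.0, k=1.5):
    bits, estimate, step, prev_bit = [], 0.0, step_min, 1
    for sample in x:
        bit = 1 if sample >= estimate else -1
        step = min(step * k, step_max) if bit == prev_bit else max(step / k, step_min)
        estimate += bit * step
        bits.append(bit)
        prev_bit = bit
    return bits

def cf_adm_decode(bits, step_min=0.01, step_max=1.0, k=1.5):
    out, estimate, step, prev_bit = [], 0.0, step_min, 1
    for bit in bits:
        step = min(step * k, step_max) if bit == prev_bit else max(step / k, step_min)
        estimate += bit * step
        out.append(estimate)
        prev_bit = bit
    return np.array(out)

# SNR of the reconstruction for a test tone (a crude stand-in for the paper's
# correlated Gaussian and speech inputs):
t = np.arange(0, 0.05, 1 / 64000)
x = 0.5 * np.sin(2 * np.pi * 800 * t)
y = cf_adm_decode(cf_adm_encode(x))
snr_db = 10 * np.log10(np.sum(x ** 2) / np.sum((x - y) ** 2))
print(f"SNR: {snr_db:.1f} dB")
```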
Abstract:
In an earlier paper (Part I) we described the construction of Hermite code for multiple grey-level pictures using the concepts of vector spaces over Galois Fields. In this paper a new algebra is worked out for Hermite codes to devise algorithms for various transformations such as translation, reflection, rotation, expansion and replication of the original picture. Also other operations such as concatenation, complementation, superposition, Jordan-sum and selective segmentation are considered. It is shown that the Hermite code of a picture is very powerful and serves as a mathematical signature of the picture. The Hermite code will have extensive applications in picture processing, pattern recognition and artificial intelligence.
Abstract:
Frequency response analysis is critical in understanding the steady-state and transient behavior of any electrical network. A network analyzer or frequency response analyzer is used to determine the frequency response of an electrical network. This paper deals with the design of an inexpensive digitally controlled network analyzer. The frequency range of the analyzer is from 10 Hz to 50 kHz (a suitable range for system studies on most power electronics apparatus). It is composed of a microcontroller (as the central processing unit) and a personal computer (as analyzer and display). Communication between the microcontroller and the personal computer is established through a USB port. The testing and evaluation of the analyzer are done with RC, RLC and multi-resonant circuits. The design steps, basis of analysis, experimental results, limitations in bandwidth and possible techniques for performance improvement are presented.
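The quantity such an instrument reports is the magnitude and phase of the network's response across frequency. The sketch below computes it analytically for an assumed RC low-pass over the 10 Hz to 50 kHz range; it illustrates the measurement, not the microcontroller firmware or PC software.

```python
# Minimal sketch (illustration only, not the instrument's firmware): the frequency
# response a network analyzer reports, computed analytically for a simple RC
# low-pass over the 10 Hz - 50 kHz range mentioned in the abstract.
import numpy as np

R, C = 1e3, 100e-9                     # assumed 1 kOhm, 100 nF -> fc ~ 1.6 kHz
freqs = np.logspace(np.log10(10), np.log10(50e3), 13)

for f in freqs:
    h = 1.0 / (1.0 + 1j * 2 * np.pi * f * R * C)   # H(jw) of the RC low-pass
    mag_db = 20 * np.log10(abs(h))
    phase_deg = np.degrees(np.angle(h))
    print(f"{f:10.1f} Hz  {mag_db:7.2f} dB  {phase_deg:8.2f} deg")
```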
Abstract:
Inverse filters are conventionally used for resolving overlapping signals of identical waveshape. However, the inverse filtering approach is shown to be useful for resolving overlapping signals, identical or otherwise, of unknown waveshapes. Digital inverse filter design based on the autocorrelation formulation of linear prediction is known to perform optimum spectral flattening of the input signal for which the filter is designed. This property of the inverse filter is used to accomplish composite signal decomposition. The theory is presented assuming the constituent signals to be responses of all-pole filters; however, the approach may also be used in more general situations.
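A minimal sketch of the idea under the all-pole assumption stated above (model order, filter coefficients and impulse positions are illustrative, not the paper's): design the prediction-error (inverse) filter from the composite signal's autocorrelation, then filter the composite through it to expose the underlying excitation impulses.

```python
# Minimal sketch (assumptions, not the authors' implementation): an inverse filter
# from the autocorrelation method of linear prediction, applied to a composite of
# two overlapping all-pole responses. The prediction-error filter flattens the
# spectrum, so its output approximates the excitation impulses.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_inverse_filter(x, order):
    """Return [1, -a1, ..., -ap]: the prediction-error filter from autocorrelation LPC."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return np.concatenate(([1.0], -a))

# Composite signal: two overlapping responses of the same all-pole (AR) filter.
ar = [1.0, -1.5, 0.8]                         # denominator of the all-pole model
excitation = np.zeros(200)
excitation[20], excitation[35] = 1.0, 0.7     # two closely spaced impulses
composite = lfilter([1.0], ar, excitation)

inv = lpc_inverse_filter(composite, order=2)
residual = lfilter(inv, [1.0], composite)     # spectrally flattened output
print("largest residual peaks at samples:", np.argsort(np.abs(residual))[-2:])
```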
Abstract:
We address the problem of exact complex-wave reconstruction in digital holography. We show that, by confining the object-wave modulation to one quadrant of the frequency domain, and by maintaining a reference-wave intensity higher than that of the object, one can achieve exact complex-wave reconstruction in the absence of noise. A feature of the proposed technique is that the zero-order artifact, which is commonly encountered in hologram reconstruction, can be completely suppressed in the absence of noise. The technique is noniterative and nonlinear. We also establish a connection between the reconstruction technique and homomorphic signal processing, which enables an interpretation of the technique from the perspective of deconvolution. Another key contribution of this paper is a direct link between the reconstruction technique and the two-dimensional Hilbert transform formalism proposed by Hahn. We show that this connection leads to explicit Hilbert transform relations between the magnitude and phase of the complex wave encoded in the hologram. We also provide results on simulated as well as experimental data to validate the accuracy of the reconstruction technique. (C) 2011 Optical Society of America
Abstract:
CDS/ISIS is an advanced non-numerical information storage and retrieval software package developed by UNESCO since 1985 to meet the need, expressed by many institutions, especially in developing countries, to streamline their information processing activities using modern (and relatively inexpensive) technologies [1]. CDS/ISIS is available for the MS-DOS, Windows and Unix operating system platforms. The formatting language of CDS/ISIS is one of its several strengths: it is used not only for formatting records for display but also for creating customized indexes. CDS/ISIS by itself does not support publishing its databases on the Internet or on CD-ROMs. However, a number of open source tools are now available that enable publishing CDS/ISIS databases on the Internet and on CD-ROMs. In this paper, we discuss ways and means of integrating CDS/ISIS databases with GSDL, an open source digital library (DL) software package.
Abstract:
We present an irreversible watermarking technique robust to affine transform attacks on camera, biomedical and satellite images stored as monochrome bitmap images. The approach is based on image normalisation, in which both watermark embedding and extraction are carried out with respect to an image normalised to meet a set of predefined moment criteria. The normalisation procedure is invariant to affine transform attacks. The resulting watermarking scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. A direct-sequence code division multiple access approach is used to embed multibit text information in the DCT and DWT transform domains. The proposed watermarking schemes are robust against various types of attacks, such as Gaussian noise, shearing, scaling, rotation, flipping, affine transform, signal processing and JPEG compression. Performance analysis results are measured using image processing metrics.
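As an illustration of the spread-spectrum embedding step, the sketch below hides a single bit in mid-frequency DCT coefficients using a keyed pseudorandom chip pattern and recovers it blindly by correlation. It is a deliberate simplification: no image normalisation, no DWT path, and one bit rather than multibit text, with all parameter values assumed.

```python
# Minimal sketch (assumptions, not the authors' full scheme): direct-sequence
# spread-spectrum embedding and correlation-based blind extraction of one
# watermark bit in mid-frequency DCT coefficients of a monochrome image.
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(image, bit, key=42, strength=5.0, band=(32, 96)):
    rng = np.random.default_rng(key)
    coeffs = dctn(image.astype(float), norm="ortho")
    chip = rng.choice([-1.0, 1.0], size=(band[1] - band[0], band[1] - band[0]))
    coeffs[band[0]:band[1], band[0]:band[1]] += strength * (1 if bit else -1) * chip
    return idctn(coeffs, norm="ortho")

def extract_bit(image, key=42, band=(32, 96)):
    rng = np.random.default_rng(key)
    coeffs = dctn(image.astype(float), norm="ortho")
    chip = rng.choice([-1.0, 1.0], size=(band[1] - band[0], band[1] - band[0]))
    corr = np.sum(coeffs[band[0]:band[1], band[0]:band[1]] * chip)
    return corr > 0

# Usage: embed a bit, add Gaussian noise as a mild attack, then extract blindly.
host = np.random.default_rng(0).integers(0, 256, size=(256, 256)).astype(float)
marked = embed_bit(host, bit=1)
attacked = marked + np.random.default_rng(1).normal(0, 2.0, size=marked.shape)
print("recovered bit:", extract_bit(attacked))
```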