953 results for digital text


Relevance: 30.00%

Abstract:

USP INFORMATION MANDATE – Resolution 6444 – October 22, 2012. The mandate aims to:
• Make public and accessible the knowledge generated by research developed at USP, encouraging the sharing, use, and generation of new content;
• Preserve institutional memory by storing the full text of the university's intellectual production (scientific, academic, artistic, and technical);
• Increase the impact of the knowledge generated at the university, both within the scientific community and among the general public.
All members of the USP community are encouraged to publish the results of their research, preferably in open-access outlets and/or repositories, and to include in their publication agreements permission to deposit their production in the BDPI, the institutional repository for intellectual production and the official source for the USP Statistical Yearbook.

Relevance: 30.00%

Abstract:

The performance of parallel vector implementations of the one- and two-dimensional orthogonal transforms is evaluated. The orthogonal transforms are computed using actual or modified fast Fourier transform (FFT) kernels. The factors to consider when comparing the speed-up of these vectorized digital signal processing algorithms are discussed, and it is shown that the traditional way of comparing the execution speed of digital signal processing algorithms, by the ratio of the numbers of multiplications and additions, is no longer effective for vector implementations; the structure of the algorithm must also be considered as a factor when comparing the execution speed of vectorized digital signal processing algorithms. Simulation results on the Cray X-MP are presented for the following orthogonal transforms: the discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), discrete Hartley transform (DHT), discrete Walsh transform (DWHT), and discrete Hadamard transform (DHDT). A comparison between the DHT and the fast Hartley transform is also included.
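As an illustration of computing one of these orthogonal transforms with an FFT kernel, here is a minimal NumPy sketch (not taken from the paper) that obtains the discrete Hartley transform from the DFT via the identity H[k] = Re(X[k]) − Im(X[k]), verified against direct evaluation of the cas kernel:

```python
import numpy as np

def dht_via_fft(x):
    """Discrete Hartley transform computed from an FFT kernel,
    using H[k] = Re(X[k]) - Im(X[k]) where X is the DFT of x."""
    X = np.fft.fft(x)
    return X.real - X.imag

def dht_direct(x):
    """Direct O(N^2) evaluation of the cas kernel, for verification."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    cas = np.cos(2 * np.pi * k * n / N) + np.sin(2 * np.pi * k * n / N)
    return cas @ x

x = np.random.rand(64)
assert np.allclose(dht_via_fft(x), dht_direct(x))
```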

Relevance: 30.00%

Abstract:

The discrete cosine transform (DCT) is an important functional block in image processing applications, and its implementation has traditionally been viewed as a specialized research task. We apply a micro-architecture-based methodology to the hardware implementation of an efficient DCT algorithm in a digital design course. Several circuit optimization and design space exploration techniques at the register-transfer and logic levels are introduced in class to generate the final design. The students not only learn how the algorithm can be implemented, but also gain insight into how other signal processing algorithms can be translated into hardware implementations. Since signal processing has very broad applications, studying and implementing a widely used signal processing algorithm in a digital design course significantly enhances students' learning in both digital signal processing and digital design.
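The course's specific algorithm and register-transfer design are not given in the abstract; as a software reference model, a hardware DCT is commonly checked against the 8-point DCT-II (the variant used in JPEG-style block coding). A minimal NumPy sketch under that assumption:

```python
import numpy as np

def dct2_matrix(N=8):
    """Orthonormal N-point DCT-II matrix, the variant most often
    mapped to hardware (e.g., 8-point blocks in image codecs)."""
    n = np.arange(N)
    k = n.reshape(-1, 1)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = 1.0 / np.sqrt(N)  # DC row carries the 1/sqrt(2) scaling
    return C

C = dct2_matrix(8)
x = np.arange(8, dtype=float)   # one 8-sample input block
y = C @ x                       # forward DCT
assert np.allclose(C.T @ y, x)  # orthonormal, so the transpose inverts
```

An RTL version would replace this floating-point matrix product with fixed-point multiply-accumulate units, which is where the register-transfer and logic-level optimizations mentioned above come into play.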

Relevance: 30.00%

Abstract:

Pesiqta Rabbati is a unique homiletic midrash that follows the liturgical calendar in its presentation of homilies for festivals and special Sabbaths. This article uses Pesiqta Rabbati to present a global theory of the literary production of rabbinic homiletic literature. With respect to Pesiqta Rabbati, it explores dating; the textual witnesses; the integrative apocalyptic meta-narrative; the description and mapping of the structure of the text; the internal and external constraints that shaped the text; text-linguistic analysis; form analysis, including problems in the texts and linguistic gap-filling; the transmission of the text; the strict formalization of the homiletic unit; the deconstruction and reconstruction of homiletic midrashim on the basis of the form-analytic units of the homily; Neusner's documentary hypothesis; the surface structures of the homiletic unit; and textual variants. The suggested methodology may assist scholars in producing editions of midrashic works, by helping to eliminate superfluous material, and in decoding and defining ancient texts.

Relevance: 30.00%

Abstract:

It is a central premise of the advertising campaigns for nearly all digital communication devices that buying them augments the user: they give us a larger, better memory; make us more “creative” and “productive”; and/or empower us to access whatever information we desire from wherever we happen to be. This study examines how recent popular cinema represents the failure of these technological devices to inspire the enchantment they once did, and opens the question of what is causing this failure. Using examples from the James Bond films, the essay analyzes the ways in which human users are frequently represented as the media connecting and augmenting digital devices, and NOT the reverse. It draws on debates about the ways in which our subjectivity is itself a networked phenomenon, as well as the extended-mind debate in the philosophy of mind. It argues (1) that this represents an important counter-narrative to the technophilic optimism about augmentation that pervades contemporary advertising, consumer culture, and educational debates; and (2) that this particular discourse of augmentation is really about technological advances, not advances in human capacity.

Relevance: 30.00%

Abstract:

The new knowledge environments of the digital age are often described as places where we are all closely read, with our buying habits, location, and identities available to advertisers, online merchants, the government, and others through our use of the Internet. This is represented as a loss of privacy in which these entities learn about our activities and desires by means that were unavailable in the pre-digital era. This article argues that the reciprocal nature of digital networks means (1) that the privacy issues we face online are not radically different from those of the pre-Internet era, and (2) that we need to reconceive close reading as an activity of which both humans and computer algorithms are capable.

Relevance: 30.00%

Abstract:

Riparian zones are dynamic, transitional ecosystems between aquatic and terrestrial ecosystems, with well-defined vegetation and soil characteristics. Because of their high variability, developing an all-encompassing definition for riparian ecotones is challenging; however, all riparian ecotones depend on two primary factors: the watercourse and its associated floodplain. Previous approaches to riparian boundary delineation have used fixed-width buffers, but this methodology has proven inadequate because it takes only the watercourse into consideration and ignores critical geomorphology and the associated vegetation and soil characteristics. Our approach offers advantages over previously used methods by employing: the geospatial modeling capabilities of ArcMap GIS; an improved sampling technique along the watercourse that can distinguish the 50-year floodplain, the optimal hydrologic descriptor of riparian ecotones; the Soil Survey Geographic (SSURGO) and National Wetland Inventory (NWI) databases to identify contiguous areas beyond the 50-year floodplain; and land use/cover characteristics associated with the delineated riparian zones. The model uses spatial data readily available from federal and state agencies and geospatial clearinghouses. An accuracy assessment was performed to evaluate how varying the 50-year flood height, changing the DEM spatial resolution (1, 3, 5, and 10 m), and positional inaccuracies in the National Hydrography Dataset (NHD) streams layer affect the boundary placement of the delineated variable-width riparian ecotone areas. The result of this study is a robust, automated GIS-based model, attached to ESRI ArcMap software, that delineates and classifies variable-width riparian ecotones.
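The ArcMap model itself is not reproduced in the abstract; the sketch below is a deliberately simplified stand-in for its core flood-height test, assuming a DEM array, a rasterized NHD stream mask, and a hypothetical 50-year flood height:

```python
import numpy as np
from scipy import ndimage

def riparian_mask(dem, stream_mask, flood_height=2.0):
    """Flag cells whose height above the nearest stream cell is below
    the 50-year flood height (a simplified stand-in for the model).

    dem          : 2-D array of elevations (m)
    stream_mask  : 2-D boolean array, True on rasterized NHD stream cells
    flood_height : assumed 50-year flood height above the channel (m)
    """
    # distance_transform_edt measures distance to the nearest zero cell,
    # so invert the stream mask; return_indices gives that cell's location.
    _, (iy, ix) = ndimage.distance_transform_edt(
        ~stream_mask, return_indices=True)
    height_above_stream = dem - dem[iy, ix]
    return height_above_stream <= flood_height

# Toy example: a plane rising 1 m per column, stream along the left edge.
dem = np.tile(np.arange(10, dtype=float), (10, 1))
stream = np.zeros_like(dem, dtype=bool)
stream[:, 0] = True
print(riparian_mask(dem, stream, flood_height=2.0).sum())  # 30 cells
```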

Relevance: 30.00%

Abstract:

Magmatic volatiles play a crucial role in volcanism, from magma production at depth, to the generation of seismic phenomena, to the control of eruption style. Accordingly, many models of volcano dynamics rely heavily on the behavior of such volatiles. Yet measurements of volcanic gas emission rates have historically been limited, which has restricted model verification to processes on the order of days or longer. UV cameras are a recent advance in the remote sensing of volcanic SO2 emissions. They offer enhanced temporal and spatial resolution over previous measurement techniques, but need development before they can be widely adopted and achieve the promise of integration with other geophysical datasets. Large datasets require a means to quickly and efficiently use imagery to calculate emission rates. We present a suite of programs designed to semi-automatically determine SO2 emission rates from series of UV images. Extraction of high temporal resolution SO2 emission rates via this software facilitates comparison of gas data with geophysical data for evaluating models of volcanic activity, and has already proven useful at several volcanoes. Integrated UV camera and seismic measurements recorded in January 2009 at Fuego volcano, Guatemala, provide new insight into the system’s shallow conduit processes. High temporal resolution SO2 data reveal patterns of SO2 emission rate relative to explosions and seismic tremor that indicate tremor and degassing share a common source process. Progressive decreases in emission rate appear to represent inhibition of gas loss from magma as a result of rheological stiffening in the upper conduit. Measurements of emission rate from two closely spaced vents, made possible by the camera’s high spatial resolution, help constrain this model. UV camera measurements at Kilauea volcano, Hawaii, in May 2010 captured two occurrences of lava filling and draining within the summit vent. High lava stands were accompanied by diminished SO2 emission rates, decreased seismic and infrasonic tremor, minor deflation, and slowed lava lake surface velocity. Incorporation of the UV camera data into the multi-parameter dataset gives credence to the likelihood of shallow gas accumulation as the cause of such events.
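The software suite itself is not described in detail in the abstract; the core calculation it automates is the standard plume-transect method, sketched below with hypothetical values for pixel size and plume speed:

```python
import numpy as np

def so2_emission_rate(column_density, pixel_size, plume_speed, row):
    """Estimate an SO2 emission rate from one calibrated UV camera frame.

    column_density : 2-D array of SO2 column densities (kg/m^2) per pixel
    pixel_size     : pixel dimension at the plume distance (m)
    plume_speed    : plume speed perpendicular to the profile (m/s)
    row            : image row defining a cross-plume integration profile

    Integrates column density along the profile (kg/m), then multiplies
    by the plume speed to obtain a mass flux (kg/s).
    """
    integrated = column_density[row, :].sum() * pixel_size  # kg/m
    return integrated * plume_speed                         # kg/s

# Toy frame: Gaussian plume cross-section with plausible column densities.
x = np.linspace(-3, 3, 200)
frame = np.tile(1e-3 * np.exp(-x**2), (100, 1))  # kg/m^2
rate = so2_emission_rate(frame, pixel_size=1.5, plume_speed=8.0, row=50)
print(f"{rate:.2f} kg/s")
```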

Relevance: 30.00%

Abstract:

Satellite measurement validations, climate models, atmospheric radiative transfer models, and cloud models all depend on accurate measurements of cloud particle size distributions, number densities, spatial distributions, and other parameters relevant to cloud microphysical processes. Yet many airborne instruments designed to measure the size distributions and concentrations of cloud particles have large uncertainties when measuring the number densities and size distributions of small ice crystals. HOLODEC (Holographic Detector for Clouds) is a new instrument that avoids many of these uncertainties and makes possible measurements that other probes have never made; these advantages are inherent to the holographic method. In this dissertation, I describe HOLODEC, its in-situ measurements of cloud particles, and the results of its test flights. I present a hologram reconstruction algorithm whose sample spacing does not vary with reconstruction distance. This algorithm accurately reconstructs the field at all distances within a typical holographic measurement volume, as verified by comparison with analytical solutions to the Huygens-Fresnel diffraction integral; it is fast to compute and has diffraction-limited resolution. Further, I describe an algorithm that can find the position along the optical axis of small particles as well as of large, complex-shaped particles. I explain an implementation of these algorithms as an efficient, robust, automated program that allows us to process holograms on a computer cluster in a reasonable time. I show size distributions and number densities of cloud particles, and show that they are within the uncertainty of independent measurements made with another measurement method. This proves the feasibility of a cloud particle instrument with advantages over current standard instruments, including a unique ability to detect shattered particles using three-dimensional positions, and a sample volume that does not vary with particle size or airspeed. The instrument can also yield two-dimensional particle profiles from the same measurements.
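The dissertation's reconstruction algorithm is not reproduced in the abstract; the standard FFT-based technique with the stated property (output sample spacing independent of reconstruction distance, unlike single-FFT Fresnel reconstruction) is angular spectrum propagation, sketched here in NumPy under that assumption:

```python
import numpy as np

def angular_spectrum_propagate(hologram, wavelength, dx, z):
    """Reconstruct the optical field at distance z from a recorded hologram.

    FFT-based angular spectrum propagation: the output sample spacing
    stays equal to the detector pitch dx at every reconstruction
    distance z.
    """
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=dx)  # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Keep propagating components only; evanescent terms are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

# Example: propagate a 1024x1024 hologram 5 cm at 532 nm, 3 um pixels.
holo = np.random.rand(1024, 1024)
field = angular_spectrum_propagate(holo, 532e-9, 3e-6, 0.05)
intensity = np.abs(field) ** 2
```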