76 results for "very slow"


Relevance:

20.00%

Publisher:

Abstract:

This thesis presents an original approach to parametric speech coding at rates below 1 kbits/sec, primarily for speech storage applications. The essential processes considered in this research encompass efficient characterization of the evolving configuration of the vocal tract to follow phonemic features with high fidelity, representation of the speech excitation using minimal parameters with minor degradation in the naturalness of the synthesized speech, and finally, quantization of the resulting parameters at the nominated rates. For encoding speech spectral features, a new method relying on Temporal Decomposition (TD) is developed which efficiently compresses spectral information through interpolation between the most steady points over the time trajectories of the spectral parameters, using a new basis function. The compression ratio provided by the method is independent of the updating rate of the feature vectors, and hence allows high resolution in tracking significant temporal variations of speech formants with no effect on the spectral data rate. Accordingly, regardless of the quantization technique employed, the method yields a high compression ratio without sacrificing speech intelligibility. Several new techniques for improving the performance of the interpolation of spectral parameters through phonetically-based analysis are proposed and implemented in this research, comprising event-approximated TD, near-optimal shaping of event-approximating functions, efficient speech parametrization for TD on the basis of an extensive investigation originally reported in this thesis, and a hierarchical error minimization algorithm for decomposition of the feature parameters which significantly reduces the complexity of the interpolation process.
Speech excitation in this work is characterized using a novel Multi-Band Excitation paradigm which accurately determines the harmonic structure in the LPC (linear predictive coding) residual spectra, within individual bands, using the concept of Instantaneous Frequency (IF) estimation in the frequency domain. The model yields an effective two-band approximation to the excitation and computes pitch and voicing with high accuracy as well. New methods for interpolative coding of the pitch and gain contours are also developed in this thesis. For pitch, relying on the correlation between phonetic evolution and pitch variations during voiced speech segments, TD is employed to interpolate the pitch contour between critical points introduced by event centroids. This compresses the pitch contour by a ratio of about 1/10 with negligible error. To approximate the gain contour, a set of uniformly-distributed Gaussian event-like functions is used, which reduces the amount of gain information to about 1/6 with acceptable accuracy. The thesis also addresses a new quantization method applied to the spectral features on the basis of the statistical properties and spectral sensitivity of the spectral parameters extracted from the TD-based analysis. The experimental results show that good quality speech, comparable to that of conventional coders operating at rates above 2 kbits/sec, can be achieved at rates of 650-990 bits/sec.
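The interpolation idea behind TD can be illustrated with a short sketch. This is not the thesis's actual event-approximating basis (which is optimised rather than fixed); here hypothetical raised-cosine event functions reconstruct a parameter trajectory from a handful of stored event targets:

```python
import numpy as np

def event_functions(centres, t, width):
    """Overlapping raised-cosine event functions, one per event centre.

    A toy stand-in for optimised event-approximating functions: each
    function peaks at its centre and decays to zero within +/- width."""
    phi = np.zeros((len(centres), len(t)))
    for k, c in enumerate(centres):
        d = np.abs(t - c)
        phi[k] = np.where(d < width, 0.5 * (1 + np.cos(np.pi * d / width)), 0.0)
    # normalise so the functions sum to one at every frame (partition of unity)
    phi /= phi.sum(axis=0, keepdims=True)
    return phi

def td_decode(targets, centres, t, width):
    """Reconstruct a parameter trajectory from event targets only."""
    phi = event_functions(centres, t, width)
    return targets @ phi   # (n_params, n_events) @ (n_events, n_frames)

# toy example: one spectral parameter, 100 frames, only 5 stored events
t = np.arange(100.0)
centres = np.array([5.0, 25.0, 50.0, 75.0, 95.0])
targets = np.array([[0.2, 0.8, 0.4, 0.9, 0.3]])   # parameter value per event
traj = td_decode(targets, centres, t, width=30.0)
```

Here 100 frames are regenerated from only 5 stored targets, illustrating how the stored spectral data rate becomes independent of the analysis frame rate.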

Relevance:

20.00%

Publisher:

Abstract:

The collective purpose of these two studies was to determine a link between the VO2 slow component and the muscle activation patterns that occur during cycling. Six male subjects performed an incremental cycle ergometer exercise test to determine a sub-TvENT (i.e. 80% of TvENT) and a supra-TvENT (TvENT + 0.75*(VO2 max - TvENT)) work load. These two constant work loads were subsequently performed on either three or four occasions for 8 mins each, with VO2 captured on a breath-by-breath basis for every test, and EMG of eight major leg muscles collected on one occasion. EMG was collected for the first 10 s of every 30 s period, except for the very first 10 s period. The VO2 data was interpolated, time aligned, averaged and smoothed for both intensities. Three models were then fitted to the VO2 data to determine the kinetic responses. One of these models was mono-exponential, while the other two were bi-exponential. A second time delay parameter was the only difference between the two bi-exponential models. An F-test was used to determine significance between the bi-exponential models using the residual sum of squares term for each model. EMG was integrated to obtain one value for each 10 s period, per muscle. The EMG data was analysed by a two-way repeated measures ANOVA. A correlation was also used to determine significance between VO2 and IEMG. The VO2 data during the sub-TvENT intensity was best described by a mono-exponential response. In contrast, during supra-TvENT exercise the two bi-exponential models best described the VO2 data. The resultant F-test revealed no significant difference between the two models and therefore demonstrated that the slow component was not delayed relative to the onset of the primary component. Furthermore, only two parameters were deemed to be significantly different based upon the two models. This is in contrast to other findings.
The EMG data, for most muscles, appeared to follow the same pattern as VO2 during both intensities of exercise. On most occasions, the correlation coefficient demonstrated significance. Although some muscles demonstrated the same relative increase in IEMG based upon increases in intensity and duration, it cannot be assumed that these muscles increase their contribution to VO2 in a similar fashion. Larger muscles with a higher percentage of type II muscle fibres would have a larger increase in VO2 over the same increase in intensity.
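The nested-model comparison described above (mono- versus bi-exponential fits judged by an F-test on residual sums of squares) can be sketched on synthetic data. All parameter values below are illustrative, not the study's:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def mono_exp(t, base, A1, tau1, td1):
    """Primary component only: delayed mono-exponential rise."""
    y = np.full_like(t, base)
    on = t > td1
    y[on] += A1 * (1 - np.exp(-(t[on] - td1) / tau1))
    return y

def bi_exp(t, base, A1, tau1, td1, A2, tau2, td2):
    """Primary component plus a delayed slow component."""
    y = mono_exp(t, base, A1, tau1, td1)
    on = t > td2
    y[on] += A2 * (1 - np.exp(-(t[on] - td2) / tau2))
    return y

# synthetic supra-threshold response: primary + slow component + noise
rng = np.random.default_rng(0)
t = np.arange(0.0, 480.0, 5.0)             # 8 min test, 5 s bins
y = bi_exp(t, 0.8, 1.6, 25.0, 15.0, 0.3, 120.0, 90.0) \
    + rng.normal(0, 0.03, t.size)

p_mono, _ = curve_fit(mono_exp, t, y, p0=[0.8, 1.8, 30, 10])
p_bi, _ = curve_fit(bi_exp, t, y, p0=[0.8, 1.6, 25, 10, 0.3, 100, 80])

rss_m = np.sum((y - mono_exp(t, *p_mono)) ** 2)
rss_b = np.sum((y - bi_exp(t, *p_bi)) ** 2)

# F-test on the nested models: does adding the slow component improve the fit?
df1 = len(p_bi) - len(p_mono)
df2 = t.size - len(p_bi)
F = ((rss_m - rss_b) / df1) / (rss_b / df2)
p_value = 1 - f_dist.cdf(F, df1, df2)
```

Because the bi-exponential model nests the mono-exponential one, a significant F statistic indicates that the slow component carries real explanatory power rather than noise.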

Relevance:

20.00%

Publisher:

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium; either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. 
Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is by using thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium.
This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered.
The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to an application in image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model gives significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths.
As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
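The fringe-counting principle behind the non-contact temperature measurement can be sketched numerically. The calibration constant (degrees per oscillation, set in reality by the crystal's thermo-optic birefringence, thickness and probe wavelength) and the intensity trace below are invented for illustration:

```python
import numpy as np
from scipy.signal import find_peaks

def temperature_change(intensity, dT_per_fringe):
    """Count intensity oscillations (fringes) in a transmitted-beam trace
    and convert the count to a temperature change; the resolution of the
    estimate is one whole fringe."""
    peaks, _ = find_peaks(intensity)
    return len(peaks) * dT_per_fringe

# synthetic trace: optical phase advances linearly during a temperature ramp
dT_per_fringe = 1.6     # hypothetical calibration, degrees C per oscillation
true_dT = 8.3           # actual temperature change applied in this toy ramp
phase = 2 * np.pi * np.linspace(0.0, true_dT / dT_per_fringe, 2000)
intensity = 0.5 * (1 + np.cos(phase))

est = temperature_change(intensity, dT_per_fringe)  # quantised estimate
```

Counting recovers the change only in whole-fringe steps (here 8.0 degrees for a true 8.3-degree ramp), which is the inherent resolution limit of the method.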

Relevance:

20.00%

Publisher:

Abstract:

Snakehead fishes in the family Channidae are obligate freshwater fishes represented by two extant genera, the African Parachanna and the Asian Channa. These species prefer still or slow-flowing water bodies, where they are top predators that exercise high levels of parental care, have the ability to breathe air, can tolerate poor water quality and, interestingly, can aestivate or traverse terrestrial habitat in response to seasonal changes in freshwater habitat availability. These attributes suggest that snakehead fishes may possess high dispersal potential, irrespective of the terrestrial barriers that would otherwise constrain the distribution of most freshwater fishes. A number of biogeographical hypotheses have been developed to account for the modern distributions of snakehead fishes across two continents, including ancient vicariance during Gondwanan break-up, or recent colonisation tracking the formation of suitable climatic conditions. Taxonomic uncertainty also surrounds some members of the Channa genus, as the geographical distributions of some taxa across southern and Southeast (SE) Asia are very large, and in one case highly disjunct. The current study adopted a molecular genetics approach to gain an understanding of the evolution of this group of fishes, and in particular how the phylogeography of two Asian species may have been influenced by contemporary versus historical levels of dispersal and vicariance. First, a molecular phylogeny was constructed based on multiple DNA loci and calibrated with fossil evidence to provide a dated chronology of divergence events among extant species, and also within species with widespread geographical distributions. The data provide strong evidence that the trans-continental distribution of the Channidae arose as a result of dispersal out of Asia and into Africa in the mid-Eocene.
Among Asian Channa, deep divergence among lineages indicates that the Oligocene-Miocene boundary was a time of significant species radiation, potentially associated with historical changes in climate and drainage geomorphology. Mid-Miocene divergence among lineages suggests that a taxonomic revision is warranted for two taxa. Deep intra-specific divergence (~8 Mya) was also detected between C. striata lineages that occur sympatrically in the Mekong River Basin. The study then examined the phylogeography and population structure of two major taxa, Channa striata (the chevron snakehead) and C. micropeltes (the giant snakehead), across SE Asia. Species-specific microsatellite loci were developed and used in addition to a mitochondrial DNA marker (Cyt b) to screen neutral genetic variation within and among wild populations. C. striata individuals were sampled across SE Asia (n=988), with the major focus being the Mekong Basin, which is the largest drainage basin in the region. The distributions of two divergent lineages were identified, and admixture analysis showed that where they co-occur they are interbreeding, indicating that after long periods of evolution in isolation, divergence has not resulted in reproductive isolation. One lineage is predominantly confined to upland areas of northern Lao PDR to the north of the Khorat Plateau, while the other, which is more closely related to individuals from southern India, has a widespread distribution across mainland SE Asia and Sumatra. The phylogeographical pattern recovered is associated with past river networks, and high diversity and divergence among all populations sampled reveal that contemporary dispersal is very low for this taxon, even where populations occur in contiguous freshwater habitats. C. micropeltes (n=280) were also sampled from across the Mekong River Basin, focusing on the lower basin where the species constitutes an important wild fishery resource. In comparison with C.
striata, allelic diversity and genetic divergence among populations were extremely low, suggesting very recent colonisation of the greater Mekong region. Populations were significantly structured into at least three discrete populations in the lower Mekong. The results of this study have implications for establishing effective conservation plans for managing both species, which represent economically important wild fishery resources for the region. For C. micropeltes, it is likely that a single fisheries stock in the Tonle Sap Great Lake is being exploited by multiple fisheries operations, and future management initiatives for this species in this region will need to account for this. For C. striata, conservation of natural levels of genetic variation will require management initiatives designed to promote population persistence at very localised spatial scales, as the high level of population structuring uncovered for this species indicates that significant unique diversity is present at this fine spatial scale.

Relevance:

20.00%

Publisher:

Abstract:

Increasingly, large amounts of public and private money are being invested in education and, as a result, schools are becoming more accountable to stakeholders for this financial input. In terms of the curriculum, governments worldwide are frequently tying school funding to students' and schools' academic performances, which are monitored through high-stakes testing programs. To accommodate the resultant pressures from these testing initiatives, many principals are re-focussing their school's curriculum on the testing requirements. Such a re-focussing, which was examined critically in this thesis, constituted an externally facilitated rapid approach to curriculum change. In line with previously enacted change theories and recommendations from these, curriculum change in schools has tended to be a fairly slow, considered, collaborative process that is facilitated internally by a deputy-principal (curriculum). However, theoretically based research has shown that such a process has often proved to be difficult and very rarely successful. The present study reports and theorises the experiences of an externally facilitated process that emerged from a practitioner model of change. This case study of the development of the controlled rapid approach to curriculum change began by establishing the reasons three principals initiated curriculum change and why they then engaged an outsider to facilitate the process. It also examined this particular change process from the perspectives of the research participants. The investigation led to the revision of the practitioner model as used in the three schools and challenged current thinking about the process of school curriculum change. The thesis aims to offer principals and the wider education community an alternative model for consideration when undertaking curriculum change.
Finally, the thesis warns that, in the longer term, the application of the study's revised model (the Controlled Rapid Approach to Curriculum Change [CRACC] Model) may have less than desirable educational consequences.

Relevance:

20.00%

Publisher:

Abstract:

One of the main challenges of slow speed machinery condition monitoring is that the energy generated by an incipient defect is too weak to be detected by traditional vibration measurements, due to its low impact energy. Acoustic emission (AE) measurement is an alternative, as it has the ability to detect crack initiation or rubbing between moving surfaces. However, AE measurement requires a high sampling frequency, and consequently a huge amount of data must be processed. It also requires expensive hardware for data capture and storage, and involves signal processing techniques to retrieve valuable information on the state of the machine. The AE signal has been utilised for early detection of defects in bearings and gears. This paper presents an online condition monitoring (CM) system for slow speed machinery which attempts to overcome those challenges. The system incorporates signal processing techniques relevant to slow speed CM, including noise removal techniques to enhance the signal-to-noise ratio and peak-hold downsampling to reduce the burden of massive data handling. The analysis software runs in the LabVIEW environment, which enables online remote control of data acquisition, real-time analysis, offline analysis and diagnostic trending. The system has been fully implemented on a site machine, where it is contributing significantly to improved maintenance efficiency and safer, more reliable operation.
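Peak-hold downsampling, mentioned above as a way to tame the AE data volume, keeps the largest-magnitude sample in each window so that short bursts survive a large reduction in data rate. A minimal sketch (the reduction factor and signal are invented for illustration, and the paper's LabVIEW implementation is not reproduced here):

```python
import numpy as np

def peak_hold_downsample(x, factor):
    """Peak-hold decimation: keep the sample with the largest magnitude
    in each block of `factor` samples, preserving its sign.

    Ordinary averaging or decimation would smear out or skip a short AE
    burst; peak-hold retains it at 1/factor of the original data volume."""
    n = (len(x) // factor) * factor          # drop any ragged tail
    blocks = x[:n].reshape(-1, factor)
    idx = np.abs(blocks).argmax(axis=1)      # position of the peak per block
    return blocks[np.arange(blocks.shape[0]), idx]

# toy AE record: low-level background noise with one single-sample burst
rng = np.random.default_rng(1)
x = 0.01 * rng.standard_normal(1_000_000)
x[500_123] = 1.0                             # the burst
y = peak_hold_downsample(x, 1000)            # 1000:1 data reduction
```

After a 1000:1 reduction the burst is still present at full amplitude in the decimated record, which is the property that matters for incipient-defect detection.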

Relevance:

20.00%

Publisher:

Abstract:

How does the image of the future operate upon history, and upon national and individual identities? To what extent are possible futures colonized by the image? What are the un-said futurecratic discourses that underlie the image of the future? Such questions inspired the examination of Japan's futures images in this thesis. The theoretical points of departure for this examination are Polak's (1973) seminal research into the theory of the 'image of the future' and seven contemporary Japanese texts which offer various alternative images for Japan's futures, selected as representative of a 'national conversation' about the futures of that nation. These seven images of the future are: 1. Report of the Prime Minister's Commission on Japan's Goals in the 21st Century—The Frontier Within: Individual Empowerment and Better Governance in the New Millennium, compiled by a committee headed by Japan's preeminent Jungian psychologist Kawai Hayao (1928-2007); 2. Slow Is Beautiful, a publication by Tsuji Shinichi in which he re-images Japan as a culture represented by the metaphor of the sloth, concerned with slow and quality-oriented livingry as a preferred image of the future to Japan's current post-bubble cult of speed and economic efficiency; 3. MuRatopia, an image of the future in the form of a microcosmic prototype community and ongoing project based on the historically significant island of Awaji, established by Japanese economist and futures thinker Yamaguchi Kaoru; 4. F.U.C.K, I Love Japan, by author Tanja Yujiro, which provides this seven-text line-up with a youth-oriented sub-culture perspective on that nation's futures; 5. IMAGINATION / CREATION, a compilation of round table discussions about Japan's futures seen from the point of view of Japan's creative vanguard; 6. Visionary People in a Visionless Country: 21 Earth Connecting Human Stories, a collection of twenty-one essays compiled by Denmark-born Tokyo resident Peter David Pedersen; and, 7.
EXODUS to the Land of Hope, authored by Murakami Ryu, one of Japan's most prolific and influential writers; this novel suggests a future scenario portraying a massive exodus of Japan's youth who, literate with state-of-the-art information and communication technologies (ICTs), move en masse to Japan's northern island of Hokkaido to launch a cyber-revolution from the peripheries. The thesis employs a Futures Triangle Analysis (FTA) as the macro organizing framework and as such examines both pushes of the present and weights from the past before moving to focus on the pulls to the future represented by the seven texts mentioned above. Inayatullah's (1999) Causal Layered Analysis (CLA) is the analytical framework used in examining the texts. Poststructuralist concepts derived primarily from the work of Michel Foucault are a particular (but not exclusive) reference point for the analytical approach it encompasses. The research questions which reflect the triangulated analytic matrix are: 1. What are the pushes—in terms of current trends—that are affecting Japan's futures? 2. What are the historical and cultural weights that influence Japan's futures? 3. What are the emerging transformative Japanese images of the future discourses, as embodied in actual texts, and what potential do they offer for transformative change in Japan? Research questions one and two are discussed in Chapter five, and research question three is discussed in Chapter six. The first two research questions should be considered preliminary. The weights outlined in Chapter five indicate that the forces working against change in Japan are formidable, structurally deep-rooted, widespread, and under-recognized as change-averse. Findings and analyses of the push dimension reveal strong forces towards a potentially very different type of Japan.
However, it is the seven contemporary Japanese images of the future, from which there is hope for transformative potential, that form the analytical heart of the thesis. In analyzing these texts the thesis establishes the richness of Japan's images of the future and, as such, demonstrates the robustness of Japan's stance vis-à-vis the problem of a perceived map-less and model-less future for Japan. Frontier is a useful image of the future, whose hybrid textuality, consisting of government, business, academia, and creative minority perspectives, demonstrates the earnestness of Japan's leaders in favour of the creation of innovative futures for that nation. Slow is powerful in its aim to reconceptualize Japan's philosophies of temporality, and build a new kind of nation founded on the principles of a human-oriented and expanded vision of economy based around the core metaphor of slowness culture. However its viability in Japan, with its post-Meiji historical pushes to an increasingly speed-obsessed social construction of reality, could render it impotent. MuRatopia is compelling in its creative hybridity indicative of an advanced IT society, set in a modern day utopian space based upon principles of a highly communicative social paradigm and sustainability. IMAGINATION / CREATION is less the plan than the platform for a new discussion on Japan's transformation from an econo-centric social framework to a new Creative Age. It accords with emerging discourses from the Creative Industries, which would re-conceive of Japan as a leading maker of meaning, rather than as the so-called guzu, a term referred to in the book meaning 'laggard'. In total, Love Japan is still the most idiosyncratic of all the images of the future discussed. Its communication style, which appeals to Japan's youth cohort, establishes it as a potentially formidable change agent in a competitive market of futures images.
Visionary People is a compelling image for its revolutionary and subversive stance against Japan's vision-less political leadership, showing that it is the people, not the futures-making elite or aristocracy, who must take the lead and create a new vanguard for the nation. Finally, Murakami's Exodus cannot be ruled out as a compelling image of the future. Sharing the appeal of Tanja's Love Japan to an increasingly disenfranchised youth, Exodus portrays a near-term future that is achievable in the here and now, by Japan's teenagers, using information and communications technologies (ICTs) to subvert leadership and create utopianist communities based on alternative social principles. The principal theoretical contribution of this investigation is its development of the Japanese image of the future. In this respect, the literature reviews represent a significant compilation, specifically about Japanese futures thinking, the Japanese image of the future, and the Japanese utopia. Though not exhaustive, this compilation will hopefully serve as a useful starting point for future research, not only on the Japanese image of the future, but for all image of the future research. Many of the sources are in Japanese, and their English summations are an added reason to respect this achievement. Secondly, the seven images of the future analysed in Chapter six represent the first time that Japanese image of the future texts have been systematically organized and analysed. Their translation from Japanese to English can be claimed as a significant secondary contribution. What is more, they have been analysed according to current futures methodologies that reveal a layeredness, depth, and overall richness existing in Japanese futures images.
Revealing this image-richness has been one of the most significant findings of this investigation, suggesting that there is fertile research to be found from this still under-explored field, whose implications go beyond domestic Japanese concerns, and may offer fertile material for futures thinkers and researchers, Japanologists, social planners, and policy makers.

Relevance:

20.00%

Publisher:

Abstract:

Hydrogels, which are three-dimensional crosslinked hydrophilic polymers, have been used and studied widely as vehicles for drug delivery due to their good biocompatibility. Traditional methods of loading therapeutic proteins into hydrogels have some disadvantages: the biological activity of drugs or proteins can be compromised during the polymerization process, or the process of loading the protein can be very time-consuming. Therefore, different loading methods have been investigated. Based on the theory of electrophoresis, an electrochemical gradient can be used to transport proteins into hydrogels, and an electrophoretic method was therefore used to load protein in this study. Chemically and radiation crosslinked polyacrylamide was used to set up the model for loading protein electrophoretically into hydrogels. Different methods of preparing the polymers have been studied and have shown the effect of the crosslinker (bisacrylamide) concentration on the protein loading and release behaviour. The mechanism of protein release from the hydrogels was anomalous diffusion (i.e. the process was non-Fickian). The UV-Vis spectra of the proteins before and after reduction show that the bioactivities of the proteins were maintained after release from the hydrogel. Due to concern over the cytotoxicity of residual monomer in polyacrylamide, poly(2-hydroxyethyl methacrylate) (pHEMA) was used as the second tested material. In order to control the pore size, a polyethylene glycol (PEG) porogen was introduced into the pHEMA. The hydrogel disintegrated after immersion in water, indicating that the swelling forces exceeded the strength of the material. In order to understand the cause of the disintegration, several different combinations of crosslinker concentration and preparation method were studied. However, the disintegration of the hydrogel still occurred after immersion in water, principally due to osmotic forces. A hydrogel suitable for drug delivery needs to be biocompatible and also robust.
Therefore, an approach to improving the mechanical properties of the porogen-containing pHEMA hydrogel by introducing an inter-penetrating network (IPN) into the hydrogel system was investigated. A double network was formed by introducing further HEMA solution into the system by both electrophoresis and slow diffusion. Raman spectroscopy was used to observe the diffusion of HEMA into the hydrogel prior to further crosslinking by γ-irradiation. The protein loading and release behaviour of the hydrogel with enhanced mechanical properties was also studied. Biocompatibility is a very important factor for the biomedical application of hydrogels. The biocompatibilities of different hydrogels were studied on both a three-dimensional HSE model and an HSE wound model, and none showed any detrimental effect on keratinocyte cells; these hydrogels therefore show good biocompatibility in both models. Given their advantages, such as the ability to absorb and deliver proteins or drugs, they have potential for use as topical materials for wound healing and other biomedical applications.
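The anomalous (non-Fickian) release mechanism reported above is conventionally identified by fitting early-time release data to the Korsmeyer-Peppas power law, M_t/M_∞ = k·tⁿ, and inspecting the exponent n. The abstract does not describe the fitting procedure actually used, so the following is only a minimal illustrative sketch (function name and synthetic data are assumptions):

```python
import numpy as np

def release_exponent(t, frac):
    """Fit the Korsmeyer-Peppas power law M_t/M_inf = k * t**n to
    early-time release data (frac <= 0.6) and return (k, n).
    For a thin slab, n ~ 0.5 suggests Fickian diffusion, while
    0.5 < n < 1.0 indicates anomalous (non-Fickian) transport."""
    mask = (frac > 0) & (frac <= 0.6)        # power law valid only early in release
    logt, logf = np.log(t[mask]), np.log(frac[mask])
    n, logk = np.polyfit(logt, logf, 1)      # linear fit in log-log space
    return np.exp(logk), n
```

Plotting log(M_t/M_∞) against log(t) and checking linearity before trusting the fitted exponent is the usual sanity check.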

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The flood flow in urbanised areas constitutes a major hazard to the population and infrastructure, as seen during the summer 2010-2011 floods in Queensland (Australia). Flood flows in urban environments have been studied only relatively recently, and no previous study has considered the impact of turbulence in the flow. During the 12-13 January 2011 flood of the Brisbane River, turbulence measurements were conducted at relatively high frequency (50 Hz) in an inundated urban environment in Gardens Point Road, next to Brisbane's central business district (CBD). The properties of the sediment flood deposits were characterised, and the acoustic Doppler velocimeter unit was calibrated to obtain both instantaneous velocity components and suspended sediment concentration in the same sampling volume with the same temporal resolution. While the flow motion in Gardens Point Road was subcritical, the water elevations and velocities fluctuated with a distinctive period between 50 and 80 s. The low-frequency fluctuations were linked with local topographic effects: a local choke induced by an upstream constriction between stairwells caused slow oscillations with a period close to the natural sloshing period of the car park. The instantaneous velocity data were analysed using a triple decomposition, and the same triple decomposition was applied to the water depth, velocity flux, suspended sediment concentration and suspended sediment flux data. The velocity fluctuation data showed a large energy component in the slow fluctuation range. For the first two tests at z = 0.35 m, the turbulence data suggested some isotropy; at z = 0.083 m, on the other hand, the findings indicated some flow anisotropy. The suspended sediment concentration (SSC) data presented a general trend of increasing SSC with decreasing water depth. During one test (T4), long-period oscillations were observed with a period of about 18 minutes.
The cause of these oscillations remains unknown to the authors. The last test (T5) took place in very shallow water with high suspended sediment concentrations; it is suggested that the flow in the car park was disconnected from the main channel. Overall, the flow conditions at the sampling sites corresponded to a specific momentum between 0.2 and 0.4 m², which would be near the upper end of the scale for safe evacuation of individuals in flooded areas. However, the authors do not believe the evacuation of individuals in Gardens Point Road would have been safe, because of the intense water surges and flow turbulence. More generally, any criterion for safe evacuation based solely upon the flow velocity, water depth or specific momentum cannot account for the hazards caused by flow turbulence, water depth fluctuations and water surges.
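The triple decomposition mentioned above splits each instantaneous record into a time-mean, a slow (low-frequency) fluctuation and a residual turbulent fluctuation. A minimal sketch using a centred moving average as the low-pass filter is shown below; the window length, function name and synthetic signal are illustrative assumptions, not the study's actual filtering choices:

```python
import numpy as np

def triple_decompose(u, fs, window_s):
    """Split a record u sampled at fs Hz into a time-mean, a slow
    fluctuation (periods longer than window_s seconds, extracted with a
    centred moving average) and a residual turbulent fluctuation, so that
    u = mean + slow + turb at every sample."""
    mean = u.mean()
    n = int(window_s * fs) | 1                 # odd window length in samples
    kernel = np.ones(n) / n
    slow = np.convolve(u - mean, kernel, mode="same")  # low-pass component
    turb = u - mean - slow                      # residual turbulence
    return mean, slow, turb
```

By construction the three components sum back to the original signal; the window must be chosen longer than the turbulent time scales but shorter than the slow oscillation period (50-80 s here) for the separation to be physically meaningful.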

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous time Markov chain. When the solution is needed after a long run or when the convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
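For reference, uniformization computes the transient distribution π(t) = π(0)·e^{Qt} as a Poisson-weighted sum of powers of the uniformized transition matrix P = I + Q/Λ, where each term costs one vector-matrix product; it is exactly this product that the relaxation strategy in the paper approximates. A minimal sketch of the exact (non-relaxed) method, with illustrative names (the relaxation itself is not shown):

```python
import numpy as np

def uniformization(Q, pi0, t, tol=1e-10):
    """Transient distribution pi(t) = pi0 * exp(Q t) of a CTMC with
    generator Q, via the uniformization (randomization) method."""
    lam = max(-Q.diagonal())            # uniformization rate >= max |q_ii|
    P = np.eye(Q.shape[0]) + Q / lam    # stochastic matrix of the uniformized DTMC
    w = np.exp(-lam * t)                # Poisson weight for k = 0
    v = pi0.copy()
    pi_t = w * v
    k, acc = 0, w
    while 1.0 - acc > tol:              # stop once enough Poisson mass is captured
        k += 1
        v = v @ P                       # the vector-matrix product dominating the cost
        w *= lam * t / k
        acc += w
        pi_t += w * v
    return pi_t
```

For large Λt the number of terms grows roughly like Λt, which is why the matrix-vector products dominate the run time when the solution is needed at a large time horizon.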

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The concept of ‘strategic dalliances’ (non-committal relationships that companies can ‘dip in and out of’, or dally with, while simultaneously maintaining longer-term strategic partnerships with other firms and suppliers) has emerged as a promising strategy by which organizations can create discontinuous innovations. But does this approach work equally well in every sector? Moreover, how can these links be used effectively to foster the process of discontinuous innovation? To assess the role that industry clockspeed plays in the success or failure of strategic dalliances, we provide case-study evidence from Twister BV, an upstream oil and gas technology provider, and show that strategic dalliances can be an enabler of the discontinuous innovation process in slow-clockspeed industries. Implications for research and practice are discussed, and conclusions are drawn from our findings.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Originally launched in 2005 with a focus on user-generated content, YouTube has become the dominant platform for online video worldwide, and an important location for some of the most significant trends and controversies in the contemporary new-media environment. Throughout its very short history, it has also intersected with, and been the focus of, scholarly debates related to the politics, economics, and cultures of the new media, in particular the “participatory turn” associated with “Web 2.0” business models’ partial reliance on amateur content and social networking. Given the slow pace of traditional scholarly publishing, the body of media and cultural studies literature substantively dedicated to describing and critically understanding YouTube’s texts, practices, and politics is still small, but it is growing steadily. At the same time, since its inception, scholars from a wide range of disciplines and critical perspectives have found YouTube useful as a source of examples and case studies, some of which are included here; others have experimented directly with the scholarly and educational potential of the platform itself. For these reasons, although primarily based on the traditional publishing outlets for media, Internet, and cultural studies, this bibliography draws eclectically on a wide range of sources, including sources very closely associated with the web business literature and with the YouTube community itself.