938 results for PART II
Abstract:
The paper details further experiments conducted to reduce the belly depth of a 13.69 m (45') four-seam shrimp trawl net. The investigations provide conclusive evidence that the optimum belly depth for this particular trawl design is 70 meshes.
Abstract:
The composition of the time-resolved surface pressure field around a high-pressure rotor blade caused by the presence of neighboring blade rows was studied, and the individual effects of wake, shock and potential field interaction were determined. Two test geometries were considered: first, a high-pressure turbine stage coupled with a swan-necked diffuser exit duct; second, the same high-pressure stage with a vane located in the downstream duct. Both tests were carried out at engine-representative Mach and Reynolds numbers. By comparing the results with time-resolved computational predictions of the flowfield, the accuracy with which the computation predicts blade interaction was determined. Evidence was obtained that, for a large downstream vane, the flow conditions in the rotor passage at any instant in time are close to steady state.
Abstract:
The first report in this three-part series (I, II and III), entitled 'Basic principles', presented details of the binders and technologies available and used in the stabilisation/solidification (S/S) treatment of hazardous waste and contaminated land. This second report, entitled 'Research', presents an overview of the main research work, both experimental and numerical, carried out in the UK, concentrating on the last decade or so but also highlighting earlier significant work. The research is reported under the headings of the individual binders, and for each binder the work is presented in chronological order. In most of this work, the S/S materials were prepared by manual/mechanical mixing. The latter part of the report presents research on S/S materials prepared by soil mixing with mixing augers. © 2005 Taylor & Francis Group.
Abstract:
An experimental investigation of a turbine stage featuring very high end wall angles is presented. The initial turbine design did not achieve a satisfactory performance, and the difference between the design predictions and the test results was traced to a large separated region on the rear suction surface. To improve the agreement between computational fluid dynamics (CFD) and experiment, it was found necessary to modify the turbulence modeling employed. The modified CFD code was then used to redesign the vane, and the changes made are described. When tested, the performance of the redesigned vane was found to agree much more closely with the predictions than that of the initial vane. Finally, the flowfield and performance of the redesigned stage are compared with those of a similar turbine, designed to perform the same duty, which lies in an annulus of moderate end wall angles. A reduction in stage efficiency of at least 2.4% was estimated for the very high end wall angle design. © 2014 by ASME.
Abstract:
© 2014 by ASME. This paper, the second of two parts, presents a new setup for the two-stage two-spool facility located at the Institute for Thermal Turbomachinery and Machine Dynamics (ITTM) of Graz University of Technology. The rig was designed to reproduce the flow behavior of a transonic turbine followed by a counter-rotating low-pressure stage, such as those in high-bypass aero-engines. The meridional flow path of the machine is characterized by a diffusing S-shaped duct between the two rotors. The role of the wide-chord vanes placed in the mid turbine frame is to lead the flow towards the low-pressure (LP) rotor with appropriate swirl. Experimental and numerical investigations performed on this setup showed that the wide-chord struts induce large wakes and extended secondary flows at the LP rotor inlet. Moreover, large deterministic fluctuations of pressure, which may cause noise and blade vibrations, were observed downstream of the LP rotor. In order to minimize secondary vortices and to damp the unsteady interactions, the mid turbine frame was redesigned by placing two zero-lift splitters in each vane passage. While the first part of the paper presented the design process of the splitters and the time-averaged flow field, this second part uses measurements performed with a fast-response probe to explain the time-resolved field. The discussion focuses on the comparison between the baseline case (without splitters) and the embedded design.
Abstract:
The kinetics and mechanism of yttrium(III) extraction with bis(2,4,4-trimethylpentyl)phosphinic acid (Cyanex 272, HA) dissolved in heptane have been investigated using a constant interfacial area cell with laminar flow. The data have been analyzed in terms of pseudo-first-order rate constants. Studies of the effects of stirring rate, temperature, aqueous-phase acidity and extractant concentration on the extraction rate show that the extraction regime depends on the extraction conditions. The extraction rate varies linearly with interfacial area; this fact, together with the strong surface activity of Cyanex 272 at the heptane-water interface, makes the interface the most probable location of the chemical reactions. The forward and reverse rate equations and the extraction rate constant for yttrium extraction with Cyanex 272 have been obtained under the experimental conditions. The rate-determining step has also been predicted from interfacial reaction models. The predictions are in good agreement with the rate equations obtained from the experimental data, confirming the basic assumption that the chemical reaction takes place at the liquid-liquid interface.
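As an illustrative sketch of the pseudo-first-order treatment mentioned above (the symbols and the excess-extractant assumption are ours, not taken from the paper), the depletion of yttrium from the aqueous phase can be written as

\[ -\frac{d[\mathrm{Y}^{3+}]}{dt} = k_{\mathrm{obs}}\,[\mathrm{Y}^{3+}], \qquad \ln\frac{[\mathrm{Y}^{3+}]_0}{[\mathrm{Y}^{3+}]_t} = k_{\mathrm{obs}}\,t, \]

where k_obs is the observed pseudo-first-order constant. When the extractant concentration [HA] and the aqueous acidity are held effectively constant during a run, their influence is lumped into k_obs, and its variation with [HA], acidity, temperature and stirring rate is what identifies the extraction regime and the rate-determining step.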
Abstract:
In the first part of this paper we reviewed the fingerprint classification literature from two different perspectives: feature extraction and classifier learning. Aiming to answer the question of which of the reviewed methods would perform best in a real implementation, we ended up with a discussion that showed the difficulty of answering this question: no previous comparison exists in the literature, and comparisons among papers are made with different experimental frameworks. Moreover, published methods are difficult to implement owing to the lack of detail in their descriptions and parameters, and to the fact that no source code is shared. For this reason, in this paper we carry out a thorough experimental study following the proposed double perspective. To do so, we have carefully implemented some of the most relevant feature extraction methods according to the explanations found in the corresponding papers, and we have tested their performance with different classifiers, including the specific proposals made by their authors. Our aim is to develop an objective experimental study in a common framework, which has not been done before and which can serve as a baseline for future work on the topic. In this way we test not only the quality of the methods but also their reusability by other researchers, and we are able to indicate which proposals could be considered for future developments. Furthermore, we show that combining different feature extraction models in an ensemble can lead to superior performance, significantly improving the results obtained by the individual models.
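As a rough sketch of the kind of ensemble combination reported above (the feature pipelines, classifiers and synthetic data are placeholders for illustration only, not the methods evaluated in the paper), classifiers trained on different feature representations can be combined by voting:

# Illustrative only: stand-ins for two different feature extraction models
# combined in a voting ensemble; not the authors' implementations.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Two pipelines acting as placeholders for distinct feature extractions
# (e.g. orientation-map-based vs. filter-bank-based features).
model_a = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(probability=True))
model_b = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

# Synthetic 5-class data standing in for the fingerprint classes.
X, y = make_classification(n_samples=500, n_features=40, n_informative=20,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier([("a", model_a), ("b", model_b)], voting="soft")
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))

In the paper's setting each base model would be trained on its own extracted feature set rather than on a shared synthetic matrix; the point here is only the combination step.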
Abstract:
For pt. I see ibid., vol. 44, p. 927-36 (1997). In a digital communications system, data are transmitted from one location to another by mapping bit sequences to symbols, and symbols to sample functions of analog waveforms. The analog waveform passes through a bandlimited (possibly time-varying) analog channel, where the signal is distorted and noise is added. In a conventional system the analog sample functions sent through the channel are weighted sums of one or more sinusoids; in a chaotic communications system the sample functions are segments of chaotic waveforms. At the receiver, the symbol may be recovered by means of coherent detection, where all possible sample functions are known, or by noncoherent detection, where one or more characteristics of the sample functions are estimated. In a coherent receiver, synchronization is the most commonly used technique for recovering the sample functions from the received waveform; these sample functions are then used as reference signals for a correlator. Synchronization-based coherent receivers have advantages over noncoherent receivers in terms of noise performance, bandwidth efficiency (in narrow-band systems) and/or data rate (in chaotic systems). These advantages are lost if synchronization cannot be maintained, for example under poor propagation conditions; in these circumstances, communication without synchronization may be preferable. The theory of conventional telecommunications is extended to chaotic communications, chaotic modulation techniques and receiver configurations are surveyed, and chaotic synchronization schemes are described.
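As a minimal, self-contained sketch of the correlator-based coherent receiver described above (the logistic-map sample functions, noise level and symbol length are assumptions for illustration, not the systems surveyed in the paper):

# Illustrative sketch: coherent (correlator-based) detection of a binary
# chaotic-shift-keying signal with known reference waveforms g0 and g1.
import numpy as np

rng = np.random.default_rng(0)

def logistic_segment(x0, n):
    """Generate n samples of the logistic map x -> 4x(1-x), made zero-mean."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        out[i] = x
    return out - out.mean()

n = 200                                  # samples per symbol
g0 = logistic_segment(0.3, n)            # sample function for bit 0
g1 = logistic_segment(0.7, n)            # sample function for bit 1

bits = rng.integers(0, 2, size=100)
tx = np.concatenate([g1 if b else g0 for b in bits])
rx = tx + 0.5 * rng.standard_normal(tx.size)   # additive-noise channel

# Coherent receiver: correlate each received segment with both known
# reference waveforms and decide in favor of the larger correlation.
decisions = []
for k in range(bits.size):
    seg = rx[k * n:(k + 1) * n]
    decisions.append(int(seg @ g1 > seg @ g0))

print("bit error rate:", np.mean(np.array(decisions) != bits))

A noncoherent receiver would instead estimate some characteristic of each segment (for example its energy) without access to the reference waveforms g0 and g1.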
Abstract:
In judicial decision making, the doctrine of chances explicitly takes the odds into account. There is more to forensic statistics, as well as various probabilistic approaches, which taken together form the object of an enduring controversy in the scholarship of legal evidence. In this paper, we reconsider the circumstances of the Jama murder and inquiry (dealt with in Part I of this paper: "The Jama Model. On Legal Narratives and Interpretation Patterns") to illustrate yet another kind of probability or improbability. What is improbable about the Jama story is actually a given, which contributes in terms of dramatic underlining. In literary theory, concepts of narratives being probable or improbable date back to the eighteenth century, when both prescientific and scientific probability was infiltrating several domains, including law. An understanding of such a backdrop throughout the history of ideas is, I claim, necessary for AI researchers who may be tempted to apply statistical methods to legal evidence. The debate for or against probability (and especially Bayesian probability) in accounts of evidence has been flourishing among legal scholars. Nowadays both the Bayesians (e.g. Peter Tillers) and the Bayesioskeptics (e.g. Ron Allen) among those legal scholars who are involved in the controversy are willing to give AI a chance to prove itself and to strive towards models of plausibility that would go beyond probability as narrowly meant. This debate within law, in turn, has illustrious precedents: Voltaire was critical of the application of probability even to litigation in civil cases, whereas Boole was a starry-eyed believer in probability applications to judicial decision making (Rosoni 1995). Not unlike Boole, a founding father of computing, computer scientists approaching the field nowadays may happen to do so without full awareness of the pitfalls. Hence the usefulness of the conceptual landscape I sketch here.
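For orientation only (this is the textbook forensic-statistics formulation, not a claim made by the paper), the odds invoked by the Bayesian camp are usually handled through the odds form of Bayes' theorem, in which a piece of evidence E updates the odds of one hypothesis H_1 against a rival H_2:

\[ \frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(E \mid H_1)}{P(E \mid H_2)} \times \frac{P(H_1)}{P(H_2)}, \]

so that, for example, evidence a thousand times more probable under H_1 than under H_2 multiplies the prior odds by a thousand. The controversy recalled above concerns, in part, whether such numbers can be meaningfully assigned to narrative evidence.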