961 results for Large Marangoni Number


Relevance: 30.00%

Abstract:

Large-eddy simulation is used to predict heat transfer in the separated and reattached flow regions downstream of a backward-facing step. Simulations were carried out at a Reynolds number of 28 000 (based on the step height and the upstream centreline velocity) with a channel expansion ratio of 1.25. The Prandtl number was 0.71. Two subgrid-scale models were tested, namely the dynamic eddy-viscosity, eddy-diffusivity model and the dynamic mixed model. Both models showed good overall agreement with available experimental data. The simulations indicated that the peak in heat-transfer coefficient occurs slightly upstream of the mean reattachment location, in agreement with experimental data. The results of these simulations have been analysed to discover the mechanisms that cause this phenomenon. The peak in heat-transfer coefficient shows a direct correlation with the peak in wall shear-stress fluctuations. It is conjectured that the peak in these fluctuations is caused by an impingement mechanism, in which large eddies, originating in the shear layer, impact the wall just upstream of the mean reattachment location. These eddies cause a 'downwash', which increases the local heat-transfer coefficient by bringing cold fluid from above the shear layer towards the wall.
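The Reynolds number quoted above is built from the step height h and the upstream centreline velocity U as Re = U·h/ν. A minimal sketch, with hypothetical values for h, U, and the kinematic viscosity of air chosen to reproduce Re = 28 000 (none of these specific values appear in the abstract):

```python
def reynolds_number(velocity_m_s: float, step_height_m: float,
                    kinematic_viscosity_m2_s: float) -> float:
    """Re = U * h / nu, as defined for the backward-facing step flow."""
    return velocity_m_s * step_height_m / kinematic_viscosity_m2_s

# Hypothetical values (air at room temperature, nu ~ 1.5e-5 m^2/s):
Re = reynolds_number(velocity_m_s=21.0, step_height_m=0.02,
                     kinematic_viscosity_m2_s=1.5e-5)
print(round(Re))  # 28000
```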

Relevance: 30.00%

Abstract:

A parallel computing environment to support optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). The approach decomposes a large engineering system into a number of smaller subsystems that are optimized in parallel on worker nodes, with the subsystem optimization results coordinated on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post-processor. An object-oriented design of these modules is presented. The environment supports all steps from the generation of optimization models through to solution and visualization on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified on an example of a large space-truss optimization. (C) 2004 Elsevier Ltd. All rights reserved.
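The master-worker decomposition described above can be sketched in miniature. This is not the authors' PVM-based environment; it is a minimal illustration using a Python thread pool in place of PVM message passing, with a made-up closed-form "subsystem optimization" standing in for a real solver:

```python
from concurrent.futures import ThreadPoolExecutor

def optimize_subsystem(subsystem):
    """Worker node: minimise a toy quadratic (x - target)**2 for one subsystem.

    The closed-form minimiser stands in for a real subsystem optimizer."""
    name, target = subsystem
    best_x = target      # minimiser of (x - target)**2
    best_value = 0.0     # objective at the minimum
    return name, best_x, best_value

def master(subsystems):
    """Master node: dispatch subsystems to workers in parallel, then coordinate."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(optimize_subsystem, subsystems))
    # Coordination step: collect solutions and aggregate subsystem objectives.
    solutions = {name: x for name, x, _ in results}
    total_objective = sum(value for _, _, value in results)
    return solutions, total_objective

solutions, total = master([("truss_a", 3.0), ("truss_b", -1.5)])
print(solutions, total)  # {'truss_a': 3.0, 'truss_b': -1.5} 0.0
```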

Relevance: 30.00%

Abstract:

We present new results of our wide-field redshift survey of galaxies in a 182 square degree region of the Shapley Supercluster (SSC), based on observations with the FLAIR-II spectrograph on the UK Schmidt Telescope (UKST). In this paper we present new measurements to give a total sample of redshifts for 710 bright (R ≤ 16.6) galaxies, of which 464 are members of the SSC (8000 < v < 18 000 km s⁻¹). Our data reveal that the main plane of the SSC (v ≈ 14 500 km s⁻¹) extends further than previously realised, filling the whole extent of our survey region of 10 degrees by 20 degrees on the sky (35 Mpc by 70 Mpc, for H₀ = 75 km s⁻¹ Mpc⁻¹). There is also a significant structure associated with the slightly nearer Abell 3571 cluster complex (v ≈ 12 000 km s⁻¹), with a caustic structure evident out to a radius of 6 Mpc. These galaxies seem to link two previously identified sheets of galaxies and establish a connection with a third one at v̄ = 15 000 km s⁻¹ near RA = 13 h. They also tend to fill the gap of galaxies between the foreground Hydra-Centaurus region and the more distant SSC. We calculate galaxy overdensities of 5.0 ± 0.1 over the 182 square degree region surveyed, and 3.3 ± 0.1 in a 159 square degree region excluding rich clusters. Over the large region of our survey the inter-cluster galaxies make up 46 per cent of all galaxies in the SSC region and may contribute a similar amount of mass to the cluster galaxies.
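The physical scales quoted above follow from Hubble's law, d = v/H0. A quick check using the abstract's adopted value H0 = 75 km/s/Mpc:

```python
import math

H0 = 75.0  # km/s/Mpc, as adopted in the abstract

def hubble_distance_mpc(velocity_km_s: float) -> float:
    """Distance in Mpc from recession velocity via Hubble's law, d = v / H0."""
    return velocity_km_s / H0

# The SSC main plane at v ~ 14 500 km/s lies at roughly:
print(round(hubble_distance_mpc(14500)))  # 193 (Mpc)

# 10 degrees on the sky at that distance subtends about:
print(round(hubble_distance_mpc(14500) * math.radians(10)))  # 34 Mpc, ~the quoted 35 Mpc
```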

Relevance: 30.00%

Abstract:

We isolated 12 polymorphic microsatellite markers for the large-billed scrubwren Sericornis magnirostris from genomic libraries enriched for (AAGG)n and (AACC)n repetitive elements and characterized them in 11 individuals. The number of alleles ranged from 4 to 15 per locus, with observed heterozygosity ranging from 0.14 to 0.91. These markers will be useful to address questions concerning population genetic structure and models of speciation.
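The observed heterozygosity reported above is, per locus, simply the fraction of genotyped individuals carrying two different alleles. A minimal sketch with made-up genotypes (the real genotype data are not given in the abstract):

```python
def observed_heterozygosity(genotypes):
    """Fraction of individuals whose two alleles at a locus differ."""
    return sum(a != b for a, b in genotypes) / len(genotypes)

# Hypothetical genotypes at one microsatellite locus (allele sizes in repeat units):
locus = [(12, 14), (12, 12), (14, 15), (12, 15), (15, 15), (12, 14), (14, 14)]
print(round(observed_heterozygosity(locus), 2))  # 0.57 (4 heterozygotes out of 7)
```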

Relevance: 30.00%

Abstract:

Summarizing topological relations is fundamental to many spatial applications, including spatial query optimization. In this article, we present several novel techniques to effectively construct cell-density-based spatial histograms for range (window) summarizations restricted to the four most important level-two topological relations: contains, contained, overlap, and disjoint. We first present a novel framework for constructing a multiscale Euler histogram in 2D space with the guarantee of exact summarization results for aligned windows in constant time. To minimize the storage space of such a multiscale Euler histogram, an approximation algorithm with approximation ratio 19/12 is presented, while the general problem is shown to be NP-hard. To conform to a limited storage space, where a multiscale histogram may be allowed only k Euler histograms, an effective algorithm is presented to construct multiscale histograms that achieve high accuracy in approximately summarizing aligned windows. Then, we present a new constant-time approximate algorithm for querying an Euler histogram when exact answers cannot be guaranteed. We also investigate the problem of nonaligned windows and the problem of effectively partitioning the data space to support nonaligned window queries. Finally, we extend our techniques to 3D space. Our extensive experiments against both synthetic and real-world datasets demonstrate that the approximate multiscale histogram techniques may improve the accuracy of existing techniques by several orders of magnitude while retaining cost efficiency, and that the exact multiscale histogram technique requires only storage space linearly proportional to the number of cells for many popular real datasets.
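The Euler histograms above rest on a simple inclusion-exclusion idea: store counts not only for cells but also for the boundaries between them, so objects spanning several cells are not double-counted. A one-dimensional sketch of that idea (the paper works with 2D/3D grids and distinguishes the four level-two topological relations, which this toy version does not capture):

```python
class EulerHistogram1D:
    """Cells count intervals overlapping each bucket; edges count intervals
    spanning each internal bucket boundary.  For an aligned query window,
    count = sum(cells) - sum(interior edges): an interval spanning k buckets
    contributes k cell counts and k - 1 edge counts, i.e. exactly 1."""

    def __init__(self, n_buckets):
        self.cells = [0] * n_buckets
        self.edges = [0] * (n_buckets - 1)  # edge e sits between buckets e, e+1

    def insert(self, first_bucket, last_bucket):
        for b in range(first_bucket, last_bucket + 1):
            self.cells[b] += 1
        for e in range(first_bucket, last_bucket):
            self.edges[e] += 1

    def count_intersecting(self, first_bucket, last_bucket):
        """Exact number of inserted intervals intersecting the aligned window."""
        return (sum(self.cells[first_bucket:last_bucket + 1])
                - sum(self.edges[first_bucket:last_bucket]))

h = EulerHistogram1D(5)
h.insert(1, 3); h.insert(2, 2); h.insert(0, 1)
print(h.count_intersecting(1, 3))  # 3 -- all three intervals touch buckets 1..3
```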

Relevance: 30.00%

Abstract:

It is often debated whether migraine with aura (MA) and migraine without aura (MO) are etiologically distinct disorders. A previous study using latent class analysis (LCA) in Australian twins showed no evidence for separate subtypes of MO and MA. The aim of the present study was to replicate these results in a population of Dutch twins and their parents, siblings and partners (N = 10,144). Latent class analysis of International Headache Society (IHS)-based migraine symptoms resulted in the identification of 4 classes: a class of unaffected subjects (class 0), a mild form of nonmigrainous headache (class 1), a moderately severe type of migraine (class 2), typically without neurological symptoms or aura (8% reporting aura symptoms), and a severe type of migraine (class 3), typically with neurological symptoms, and aura symptoms in approximately half of the cases. Given the overlap of neurological symptoms and nonmutual exclusivity of aura symptoms, these results do not support the MO and MA subtypes as being etiologically distinct. The heritability in female twins of migraine based on LCA classification was estimated at .50 (95% confidence interval [CI] .27-.59), similar to IHS-based migraine diagnosis (h² = .49, 95% CI .19-.57). However, using a dichotomous classification (affected/unaffected) decreased heritability for the IHS-based classification (h² = .33, 95% CI .00-.60), but not the LCA-based classification (h² = .51, 95% CI .23-.61). Importantly, use of the LCA-based classification increased the number of subjects classified as affected. The heritability of the screening question was similar to more detailed LCA and IHS classifications, suggesting that the screening procedure is an important determining factor in genetic studies of migraine.
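For intuition on the heritability figures above: twin designs exploit the fact that monozygotic (MZ) twins share all their segregating genes while dizygotic (DZ) twins share half on average. The abstract's h² estimates come from full model fitting, but the classic back-of-envelope version is Falconer's formula, h² = 2(rMZ - rDZ), where rMZ and rDZ are the MZ and DZ twin-pair correlations:

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer's heritability estimate: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical twin-pair correlations (not taken from this study):
print(falconer_h2(r_mz=0.50, r_dz=0.25))  # 0.5
```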

Relevance: 30.00%

Abstract:

Background: Heparin-induced thrombocytopenia (HIT) is a potentially serious adverse reaction caused by platelet-activating antibodies. Aim: To describe experience with HIT. Methods: Twenty-two patients identified by laboratory records of heparin-associated antibodies with a 50% or greater decrease in platelet count were reviewed in our 600-bed metropolitan teaching hospital from 1999 to April 2005. Results: There was an increase in the frequency of HIT diagnosed during the review period, which was associated with a rise in the number of requests for HIT antibodies. Thrombotic complications were identified in 14 of 22 patients with HIT. Mean age was 65 years, and 11 patients were men. Seven patients died and HIT was considered contributory in four. One patient required mid-forearm amputation. Unfractionated heparin was used in all cases and five patients also received enoxaparin. Mean time to HIT screen, reflecting when the diagnosis was first suspected, was 14 days. Platelet nadir ranged from 6 × 10⁹/L to 88 × 10⁹/L, with a percentage drop in platelet count of 67-96%. Alternative anticoagulation (danaparoid) was not used in three patients, two of whom died. Conclusions: HIT is a potentially life-threatening complication of heparin therapy, associated with a fall in platelet count and a high incidence of thromboembolic complications. It is most frequently seen using unfractionated heparin therapy. The increase in frequency of HIT diagnosed in our hospital appears to be associated with a greater awareness of the entity, although detection is often delayed. Platelet count should be monitored in patients on heparin and the presence of antiplatelet antibodies determined if HIT is suspected. Treatment involves both discontinuation of heparin and the use of an alternative anticoagulant such as danaparoid because of the persisting risk of thrombosis.

Relevance: 30.00%

Abstract:

The inherent self-recognition properties of DNA have led to its use as a scaffold for various nanotechnology self-assembly applications, with macromolecular complexes, metallic and semiconducting nanoparticles, proteins, inter alia, being assembled onto a designed DNA scaffold. Such structures may typically comprise a number of DNA molecules organized into macromolecules. Many studies have used synthetic methods to produce the constituent DNA molecules, but this typically constrains the molecules to be no longer than around 100 base pairs (30 nm). However, applications that require larger self-assembling DNA complexes, several tens of nanometers or more, need to be generated by other techniques. Here, we present a generic technique to generate large linear, branched, and/or circular DNA macromolecular complexes. The effectiveness of this technique is demonstrated here by the use of Lambda Bacteriophage DNA as a template to generate single- and double-branched DNA structures approximately 120 nm in size.

Relevance: 30.00%

Abstract:

We have developed a way to represent Mohr-Coulomb failure within a mantle-convection fluid dynamics code. We use a viscous model of deformation with an orthotropic viscoplasticity (a different viscosity for pure shear than for simple shear) to define a preferred plane for slip to occur given the local stress field. The simple-shear viscosity and the deformation can then be iterated to ensure that the yield criterion is always satisfied. We again assume the Boussinesq approximation, neglecting any effect of dilatancy on the stress field. An additional criterion is required to ensure that deformation occurs along the plane aligned with the maximum shear strain-rate rather than the perpendicular plane, which is formally equivalent in any symmetric formulation. We also allow for strain-weakening of the material. The material can remember both the accumulated failure history and the direction of failure. We have included this capability in a Lagrangian-integration-point finite element code and show a number of examples of extension and compression of a crustal block with a Mohr-Coulomb failure criterion. The formulation itself is general and applies to 2- and 3-dimensional problems.
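The viscosity iteration described above can be sketched as follows. For a Mohr-Coulomb material the yield stress is tau_y = C + mu * sigma_n (cohesion plus friction times normal stress); whenever the viscous stress 2*eta*strain_rate exceeds tau_y, the effective viscosity is reduced so the stress sits on the yield surface. A minimal sketch with hypothetical nondimensional values (the paper's orthotropic formulation additionally distinguishes pure-shear from simple-shear viscosity, which is omitted here):

```python
def effective_viscosity(eta, strain_rate, normal_stress, cohesion, friction_coeff):
    """Return a viscosity such that the stress 2 * eta_eff * strain_rate never
    exceeds the Mohr-Coulomb yield stress tau_y = C + mu * sigma_n."""
    tau_y = cohesion + friction_coeff * normal_stress
    tau_viscous = 2.0 * eta * strain_rate
    if tau_viscous <= tau_y:
        return eta                       # below yield: viscosity unmodified
    return tau_y / (2.0 * strain_rate)   # at yield: stress capped at tau_y

# Hypothetical (nondimensional) values:
eta_eff = effective_viscosity(eta=1e3, strain_rate=1.0, normal_stress=10.0,
                              cohesion=4.0, friction_coeff=0.6)
print(eta_eff)  # 5.0 -> stress 2 * 5 * 1 = 10 = 4 + 0.6 * 10, on the yield surface
```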

Relevance: 30.00%

Abstract:

With rapid advances in video processing technologies and ever faster increases in network bandwidth, the popularity of video content publishing and sharing has made similarity search an indispensable operation for retrieving videos of interest to users. Video similarity is usually measured by the percentage of similar frames shared by two video sequences, with each frame typically represented as a high-dimensional feature vector. Unfortunately, the high complexity of video content poses three major challenges for fast retrieval: (a) effective and compact video representations, (b) efficient similarity measurements, and (c) efficient indexing of the compact representations. In this paper, we propose a number of methods to achieve fast similarity search for very large video databases. First, each video sequence is summarized into a small number of clusters, each of which contains similar frames and is represented by a novel compact model called the Video Triplet (ViTri). A ViTri models a cluster as a tightly bounded hypersphere described by its position, radius, and density. The ViTri similarity is measured by the volume of intersection between two hyperspheres multiplied by the minimal density, i.e., the estimated number of similar frames shared by the two clusters. The total number of similar frames is then estimated to derive the overall similarity between two video sequences; hence the time complexity of the video similarity measure can be reduced greatly. To further reduce the number of similarity computations on ViTris, we introduce a new one-dimensional transformation technique which rotates and shifts the original axis system using PCA in such a way that the original inter-distance between two high-dimensional vectors can be maximally retained after mapping. An efficient B+-tree is then built on the transformed one-dimensional values of the ViTris' positions. Such a transformation enables the B+-tree to achieve its optimal performance by quickly filtering out a large portion of non-similar ViTris. Our extensive experiments on real large video datasets prove the effectiveness of our proposals, which outperform existing methods significantly.
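The ViTri similarity above multiplies the overlap volume of two hyperspheres by the smaller density. In general dimension the lens volume has no simple closed form, so the sketch below uses the standard 3D sphere-sphere intersection formula purely as an illustration (the actual feature dimension and estimator details are in the paper):

```python
import math

def sphere_intersection_volume(r1, r2, d):
    """Volume of the lens where two 3D spheres (radii r1, r2, centre distance d) overlap."""
    if d >= r1 + r2:                 # spheres disjoint
        return 0.0
    if d <= abs(r1 - r2):            # one sphere entirely inside the other
        r = min(r1, r2)
        return 4.0 / 3.0 * math.pi * r ** 3
    # Standard two-sphere lens formula.
    return (math.pi * (r1 + r2 - d) ** 2
            * (d * d + 2 * d * (r1 + r2) + 6 * r1 * r2 - 3 * (r1 * r1 + r2 * r2))
            / (12.0 * d))

def vitri_similarity(radius1, density1, radius2, density2, centre_distance):
    """Estimated number of similar frames shared by two clusters:
    overlap volume times the smaller of the two densities."""
    overlap = sphere_intersection_volume(radius1, radius2, centre_distance)
    return overlap * min(density1, density2)

# Two unit-radius clusters with centres 1 apart overlap in a lens of volume 5*pi/12:
print(round(vitri_similarity(1.0, 2.0, 1.0, 3.0, 1.0), 3))  # 2.618
```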

Relevance: 30.00%

Abstract:

We carried out a retrospective review of the videoconference activity records in a university-run hospital telemedicine studio. Usage records describing videoconferencing activity in the telemedicine studio were compared with the billing records provided by the telecommunications company. During a seven-month period there were 211 entries in the studio log: 108 calls made from the studio and 103 calls made from a far-end location. We found that only 103 of the 195 calls reported by the telecommunications company were recorded in the usage log. The remaining 92 calls were not recorded, probably for one of several reasons: failed calls (57% of the unrecorded calls lasted less than 2 min, with a median of 1.6 min); incorrectly recorded call origin (calls may have been entered in the usage diary as initiated from the far end when they were actually initiated from the studio); and human error. Our study showed that manual recording of videoconference activity may not accurately reflect the actual activity taking place. Those responsible for recording and analysing videoconference activity, particularly in large telemedicine networks, should do so with care.

Relevance: 30.00%

Abstract:

Summarizing topological relations is fundamental to many spatial applications including spatial query optimization. In this paper, we present several novel techniques to effectively construct cell density based spatial histograms for range (window) summarizations restricted to the four most important topological relations: contains, contained, overlap, and disjoint. We first present a novel framework to construct a multiscale histogram composed of multiple Euler histograms with the guarantee of the exact summarization results for aligned windows in constant time. Then we present an approximate algorithm, with the approximation ratio 19/12, to minimize the storage spaces of such multiscale Euler histograms, although the problem is generally NP-hard. To conform to a limited storage space where only k Euler histograms are allowed, an effective algorithm is presented to construct multiscale histograms to achieve high accuracy. Finally, we present a new approximate algorithm to query an Euler histogram that cannot guarantee the exact answers; it runs in constant time. Our extensive experiments against both synthetic and real world datasets demonstrated that the approximate multiscale histogram techniques may improve the accuracy of the existing techniques by several orders of magnitude while retaining the cost efficiency, and the exact multiscale histogram technique requires only a storage space linearly proportional to the number of cells for the real datasets.

Relevance: 30.00%

Abstract:

N-tuple recognition systems (RAMnets) are normally modeled using a small number of input lines to each RAM, because the address space grows exponentially with the number of inputs. It is impossible to implement an arbitrarily large address space as physical memory. But given modest amounts of training data, correspondingly modest numbers of bits will be set in that memory. Hash arrays can therefore be used instead of a direct implementation of the required address space. This paper describes some exploratory experiments using the hash-array technique to investigate the performance of RAMnets with very large numbers of input lines. An argument is presented which concludes that performance should peak at a relatively small n-tuple size, but the experiments carried out so far contradict this. Further experiments are needed to confirm this unexpected result.
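The hash-array idea above can be made concrete: each RAM becomes a hash table (here a Python set) holding only the addresses actually seen during training, instead of 2**n physical memory locations. A minimal sketch of an n-tuple recognizer over binary input vectors (tuple sampling and scoring are simplified; this is not the authors' exact implementation):

```python
import random

class RAMnet:
    """n-tuple recognizer where each 'RAM' is a set of addresses seen in training."""

    def __init__(self, input_bits, n_tuple, n_rams, seed=0):
        rng = random.Random(seed)
        # Each RAM observes a fixed random n-tuple of input bit positions.
        self.tuples = [rng.sample(range(input_bits), n_tuple) for _ in range(n_rams)]
        self.rams = [set() for _ in range(n_rams)]

    def _addresses(self, bits):
        return [tuple(bits[i] for i in t) for t in self.tuples]

    def train(self, bits):
        for ram, addr in zip(self.rams, self._addresses(bits)):
            ram.add(addr)   # storage grows with the training data, not with 2**n

    def score(self, bits):
        """Number of RAMs whose address for this pattern was seen in training."""
        return sum(addr in ram for ram, addr in zip(self.rams, self._addresses(bits)))

net = RAMnet(input_bits=8, n_tuple=3, n_rams=10)
net.train([1, 1, 1, 1, 0, 0, 0, 0])
print(net.score([1, 1, 1, 1, 0, 0, 0, 0]))  # 10: every RAM recognises the trained pattern
```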