78 results for "match constraints"
Abstract:
Oligomeric copper(I) clusters are formed by the insertion reaction of copper(I) aryloxides into heterocumulenes. The effect of varying the steric demands of the heterocumulene and the aryloxy group on the nuclearity of the oligomers formed has been probed. Reactions of copper(I) 2-methoxyphenoxide and copper(I) 2-methylphenoxide with PhNCS result in the formation of the hexameric complexes hexakis[N-phenylimino(aryloxy)methanethiolato copper(I)] 3 and 4, respectively. Single crystal X-ray data confirmed the structure of 3. Similar insertion reactions of CS2 with the copper(I) aryloxides formed from 2,6-di-tert-butyl-4-methylphenol and 2,6-dimethylphenol result in oligomeric copper(I) complexes 7 and 8 bearing the (aryloxy)thioxanthate ligand. Complex 7 was confirmed to be a tetramer by single crystal X-ray crystallography. Reactions carried out with 2-mercaptopyrimidine, which has ligating properties similar to N-alkylimino(aryloxy)methanethiolate, result in the formation of an insoluble polymeric complex 11. The fluorescence spectra of the oligomeric complexes are helpful in determining their nuclearity. It has been shown that a decrease in the steric requirements of either the heterocumulene or the aryloxy part of the ligand can compensate for steric constraints and facilitate oligomerization. (C) 1999 Elsevier Science Ltd. All rights reserved.
Abstract:
The amount of data contained in electroencephalogram (EEG) recordings is quite massive, and this places constraints on bandwidth and storage. The requirement of online transmission of data needs a scheme that allows higher performance with lower computation. Single channel algorithms, when applied to multichannel EEG data, fail to meet this requirement. While many methods have been proposed for multichannel ECG compression, not much work appears to have been done in the area of multichannel EEG compression. In this paper, we present an EEG compression algorithm based on a multichannel model, which gives higher performance compared to other algorithms. Simulations have been performed on both normal and pathological EEG data, and it is observed that a high compression ratio with very large SNR is obtained in both cases. The reconstructed signals are found to match the original signals very closely, thus confirming that diagnostic information is preserved during transmission.
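For reference, the compression ratio and SNR reported above are presumably computed in the usual way for signal reconstruction; the definitions below are the standard ones and are given only as a reminder (the paper's exact conventions are not stated in the abstract).

```latex
% Standard definitions (assumed; the paper may normalize differently):
\[
  \mathrm{CR} = \frac{\text{bits in the original EEG record}}{\text{bits in the compressed record}},
  \qquad
  \mathrm{SNR} = 10 \log_{10}
  \frac{\sum_{n} x[n]^{2}}{\sum_{n} \left( x[n] - \hat{x}[n] \right)^{2}} \ \mathrm{dB},
\]
```

where x is the original signal (per channel) and x-hat its reconstruction after decompression.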
Abstract:
We report here results from a dynamo model developed along the lines of the Babcock-Leighton idea that the poloidal field is generated at the surface of the Sun from the decay of active regions. In this model magnetic buoyancy is handled with a realistic recipe wherein toroidal flux is made to erupt from the overshoot layer wherever it exceeds a specified critical field B_C (10^5 G). The erupted toroidal field is then acted upon by the alpha-effect near the surface to give rise to the poloidal field. In this paper we study the effect of buoyancy on the dynamo-generated magnetic fields. Specifically, we show that the mechanism of buoyant eruption and the subsequent depletion of the toroidal field inside the overshoot layer is capable of constraining the magnitude and distribution of the magnetic field there. We also believe that a critical study of this mechanism may give us new information regarding the solar interior, and we end with an example in which we propose a method for estimating an upper limit of the diffusivity within the overshoot layer.
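A minimal statement of the buoyancy recipe described above, written out only as a sketch of the threshold condition the abstract gives (how the erupted flux is depleted and redeposited is model-specific and not reproduced here):

```latex
% Eruption criterion in the overshoot layer (from the abstract):
\[
  B_{\phi}(r,\theta) \;>\; B_{C} \simeq 10^{5}\ \mathrm{G}
  \quad\Longrightarrow\quad
  \text{toroidal flux erupts to the surface, where the } \alpha\text{-effect acts on it.}
\]
```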
Abstract:
We know, from the classical work of Tarski on real closed fields, that elimination is, in principle, a fundamental engine for mechanized deduction. But, in practice, the high complexity of elimination algorithms has limited their use in the realization of mechanical theorem proving. We advocate qualitative theorem proving, where elimination is attractive since most processes of reasoning take place through the elimination of middle terms, and because the computational complexity of the proof is not an issue. Indeed what we need is the existence of the proof and not its mechanization. In this paper, we treat the linear case and illustrate the power of this paradigm by giving extremely simple proofs of two central theorems in the complexity and geometry of linear programming.
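To make the "elimination of middle terms" concrete in the linear case, the following Fourier-Motzkin-style step is the textbook example of the kind of elimination meant here (the paper's own machinery may be phrased differently):

```latex
% Eliminating the middle variable y from a pair of linear constraints:
\[
  a_{1}x + b_{1} \le y
  \quad\text{and}\quad
  y \le a_{2}x + b_{2}
  \quad\Longrightarrow\quad
  a_{1}x + b_{1} \le a_{2}x + b_{2},
\]
```

so the system has a solution in y exactly when every lower bound on y is at most every upper bound, and the rest of the proof proceeds entirely in the remaining variables.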
Abstract:
In this work, we evaluate performance of a real-world image processing application that uses a cross-correlation algorithm to compare a given image with a reference one. The algorithm processes individual images represented as 2-dimensional matrices of single-precision floating-point values using O(n^4) operations involving dot-products and additions. We implement this algorithm on an nVidia GTX 285 GPU using CUDA, and also parallelize it for the Intel Xeon (Nehalem) and IBM Power7 processors, using both manual and automatic techniques. Pthreads and OpenMP with SSE and VSX vector intrinsics are used for the manually parallelized version, while a state-of-the-art optimization framework based on the polyhedral model is used for automatic compiler parallelization and optimization. The performance of this algorithm on the nVidia GPU suffers from: (1) a smaller shared memory, (2) unaligned device memory access patterns, (3) expensive atomic operations, and (4) weaker single-thread performance. On commodity multi-core processors, the application dataset is small enough to fit in caches, and when parallelized using a combination of task and short-vector data parallelism (via SSE/VSX) or through fully automatic optimization from the compiler, the application matches or beats the performance of the GPU version. The primary reasons for better multi-core performance include larger and faster caches, higher clock frequency, higher on-chip memory bandwidth, and better compiler optimization and support for parallelization. The best performing versions on the Power7, Nehalem, and GTX 285 run in 1.02 s, 1.82 s, and 1.75 s, respectively. These results conclusively demonstrate that, under certain conditions, it is possible for a FLOP-intensive structured application running on a multi-core processor to match or even beat the performance of an equivalent GPU version.
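A minimal sketch of the direct O(n^4) cross-correlation kernel the abstract describes, written here in NumPy for clarity; the function name, normalization, and boundary handling are assumptions, and the paper's actual versions are CUDA, SSE/VSX-vectorized, and compiler-parallelized implementations of this loop nest.

```python
import numpy as np

def cross_correlate(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Direct 2-D cross-correlation of `image` against `reference`.

    For an n x n image and a reference of comparable size this costs O(n^4)
    single-precision multiply-adds, matching the dot-product structure
    mentioned in the abstract. Illustrative only.
    """
    h, w = image.shape
    rh, rw = reference.shape
    out = np.zeros((h - rh + 1, w - rw + 1), dtype=np.float32)
    ref = reference.astype(np.float32).ravel()
    for i in range(out.shape[0]):            # candidate vertical offset
        for j in range(out.shape[1]):        # candidate horizontal offset
            window = image[i:i + rh, j:j + rw].astype(np.float32).ravel()
            out[i, j] = np.dot(window, ref)  # one dot product per offset
    return out
```

The parallel versions distribute the two outer offset loops across threads and vectorize the inner dot product, which is why cache size and on-chip bandwidth dominate the multi-core results above.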
Abstract:
In this article we study the problem of joint congestion control, routing, and MAC layer scheduling in a multi-hop wireless mesh network where the nodes are subject to maximum energy expenditure rates. We model link contention in the wireless network using the contention graph, and we model the energy expenditure rate constraint of nodes using the energy expenditure rate matrix. We formulate the problem as an aggregate utility maximization problem and apply duality theory to decompose it into two sub-problems, namely a network layer routing and congestion control problem and a MAC layer scheduling problem. The source adjusts its rate based on the cost of the least-cost path to the destination, where the cost of the path includes not only the prices of the links in it but also the prices associated with the nodes on the path. The MAC layer scheduling of the links is carried out based on the prices of the links. We study the effects of the energy expenditure rate constraints of the nodes on the optimal throughput of the network.
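Schematically, the aggregate utility maximization and its price-based decomposition take the familiar network-utility-maximization form sketched below; the symbols are assumptions made for illustration, and the paper's full formulation also carries the routing and scheduling variables.

```latex
% Sketch of the primal problem and of the source's reaction to path prices:
\[
  \max_{x_{s} \ge 0} \; \sum_{s} U_{s}(x_{s})
  \quad \text{s.t. link-contention and node energy-expenditure-rate constraints,}
\]
\[
  x_{s} \;=\; \arg\max_{x \ge 0}
  \Big[\, U_{s}(x) - x \Big( \sum_{l \in p} \lambda_{l} + \sum_{n \in p} \mu_{n} \Big) \Big],
\]
```

with dual prices lambda_l on links and mu_n on node energy budgets and p the least-cost path; this is the statement above that the path cost combines link prices and node prices, while the MAC layer schedules links according to the link prices.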
Abstract:
We use the HI scale height data along with the HI rotation curve as constraints to probe the shape and density profile of the dark matter halos of M31 (Andromeda) and the superthin, low surface brightness (LSB) galaxy UGC 07321. We model the galaxy as a two-component system of gravitationally-coupled stars and gas subjected to the force field of a dark matter halo. For M31, we get a flattened halo, which is required to match the outer galactic HI scale height data, with our best-fit axis ratio (0.4) lying at the most oblate end of the distributions obtained from cosmological simulations. For UGC 07321, our best-fit halo core radius is only slightly larger than the stellar disc scale length, indicating that the halo is important even at small radii in this LSB galaxy. The high value of the gas velocity dispersion required to match the scale height data can explain the low star-formation rate of this galaxy.
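For context, a two-component model of this kind is usually written as coupled isothermal vertical-equilibrium equations of the form below; this is a generic sketch under assumed notation, not necessarily the exact form adopted in the paper.

```latex
% Generic coupled vertical equilibrium for stars (s) and gas (g) in the halo force
% field (assumed form; radial terms and other refinements are omitted):
\[
  \langle v_{z}^{2} \rangle_{i}\,
  \frac{d}{dz}\!\left( \frac{1}{\rho_{i}} \frac{d\rho_{i}}{dz} \right)
  \;=\; -4\pi G\,(\rho_{s} + \rho_{g}) \;+\; \frac{d}{dz}\big(K_{z}\big)_{\mathrm{halo}},
  \qquad i = s,\, g,
\]
```

where the angle brackets denote the vertical velocity dispersion of component i and (K_z)_halo is the vertical force from the dark matter halo; the HI scale height then follows from the half-width of the resulting gas density profile.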
Abstract:
The focus of this paper is on designing useful compliant micro-mechanisms of high-aspect-ratio which can be microfabricated by the cost-effective wet etching of (110) orientation silicon (Si) wafers. Wet etching of (110) Si imposes constraints on the geometry of the realized mechanisms because it allows only etch-through in the form of slots parallel to the wafer's flat with a certain minimum length. In this paper, we incorporate this constraint in the topology optimization and obtain compliant designs that meet the specifications on the desired motion for given input forces. Using this design technique and wet etching, we show that we can realize high-aspect-ratio compliant micro-mechanisms. For a (110) Si wafer of 250 µm thickness, the minimum length of the etch opening to get a slot is found to be 866 µm. The minimum achievable width of the slot is limited by the resolution of the lithography process and this can be a very small value. This is studied by conducting trials with different mask layouts on a (110) Si wafer. These constraints are taken care of by using a suitable design parameterization rather than by imposing the constraints explicitly. Topology optimization, as is well known, gives designs using only the essential design specifications. In this work, we show that our technique also gives manufacturable mechanism designs along with lithography mask layouts. Some designs obtained are transferred to lithography masks and mechanisms are fabricated on (110) Si wafers.
Abstract:
Topology optimization methods have been shown to have extensive application in the design of microsystems. However, their utility in practical situations is restricted to predominantly planar configurations due to the limitations of most microfabrication techniques in realizing structures with arbitrary topologies in the direction perpendicular to the substrate. This study addresses the problem of synthesizing optimal topologies in the out-of-plane direction while obeying the constraints imposed by surface micromachining. A new formulation that achieves this by defining a design space that implicitly obeys the manufacturing constraints with a continuous design parameterization is presented in this paper. This is in contrast to including manufacturing cost in the objective function or constraints. The resulting solutions of the new formulation obtained with gradient-based optimization directly provide the photolithographic mask layouts. Two examples that illustrate the approach for the case of stiff structures are included.
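Purely as an illustration of what "a design space that implicitly obeys the manufacturing constraints" can look like for a layered process, the sketch below maps one continuous 2-D design field per structural layer to a 3-D density field that can only vary out-of-plane as a stack of deposited layers; the function and variable names are hypothetical and this is not the paper's actual parameterization.

```python
import numpy as np

def layered_density(layer_fields: np.ndarray, layer_voxels: list[int]) -> np.ndarray:
    """Illustrative surface-micromachining-style design parameterization.

    layer_fields: (n_layers, ny, nx) continuous design variables in [0, 1],
                  one in-plane field per structural layer.
    layer_voxels: number of voxels through the thickness of each layer.

    Every voxel within a layer inherits that layer's in-plane field, so the
    optimized fields can be read off directly as photolithographic mask layouts.
    """
    n_layers, ny, nx = layer_fields.shape
    nz = sum(layer_voxels)
    rho = np.zeros((nz, ny, nx))
    z0 = 0
    for k in range(n_layers):
        rho[z0:z0 + layer_voxels[k]] = layer_fields[k]  # stack layer k above the substrate
        z0 += layer_voxels[k]
    return rho
```

Because the out-of-plane variation is fixed by the layer stack, a gradient-based optimizer only ever sees admissible geometries, which is the sense in which manufacturability is handled by the parameterization rather than by explicit constraint equations or penalty terms.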
Abstract:
We present planforms of line plumes formed on horizontal surfaces in turbulent convection, along with the length of line plumes measured from these planforms, in a six-decade range of Rayleigh numbers (10^5 < Ra < 10^11) and at three Prandtl numbers (Pr = 0.7, 5.2, 602). Using geometric constraints on the relations for the mean plume spacings, we obtain expressions for the total length of near-wall plumes on horizontal surfaces in turbulent convection. The plume length per unit area (L_p/A), made dimensionless by the near-wall length scale in turbulent convection (Z_w), remains constant for a given fluid. The Nusselt number is shown to be directly proportional to L_p H/A for a given fluid layer of height H. The increase in Pr has a weak influence in decreasing L_p/A. These expressions match the measurements, thereby showing that the assumption of laminar natural convection boundary layers in turbulent convection is consistent with the observed total length of line plumes. We then show that similar relationships are obtained based on the assumption that the line plumes are the outcome of the instability of laminar natural convection boundary layers on the horizontal surfaces.
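The two dimensionless statements in the abstract can be written compactly as follows (the proportionality constants are fluid-dependent and not reproduced here):

```latex
\[
  \frac{L_{p}}{A}\, Z_{w} \;\approx\; \mathrm{const}\ \text{for a given fluid (weakly decreasing with } Pr\text{)},
  \qquad
  Nu \;\propto\; \frac{L_{p} H}{A},
\]
```

where L_p is the total length of near-wall line plumes over plate area A, Z_w is the near-wall length scale, and H is the fluid-layer height.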
Abstract:
Garnet-kyanite-staurolite gneiss in the Pangong complex, Ladakh Himalaya, contains porphyroblastic euhedral garnets, blades of kyanite and resorbed staurolite surrounded by a fine-grained muscovite-biotite matrix associated with a leucogranite layer. Sillimanite is absent. The gneiss contains two generations of garnet, in cores and rims, that represent two stages of metamorphism. Garnet cores are extremely rich in Mn (X_Sps = 0.35-0.38) and poor in Fe (X_Alm = 0.40-0.45), whereas rims are relatively Mn-poor (X_Sps = 0.07-0.08) and rich in Fe (X_Alm = 0.75-0.77). We suggest that garnet cores formed during prograde metamorphism in a subduction zone followed by abrupt exhumation, during early collision of the Ladakh arc and the Karakoram block. The subsequent India-Asia continental collision subducted the metamorphic rocks to a mid-crustal level, where the garnet rims overgrew the Mn-rich cores at ca. 680 degrees C and ca. 8.5 kbar. P-T conditions were estimated from phase diagrams computed in the Mn-NCKFMASHT system, using a calculated bulk chemical composition, for the garnet-kyanite-staurolite-bearing assemblage. Muscovites from the metamorphic rocks and associated leucogranites have consistent K-Ar ages (ca. 10 Ma), closely related to activation of the Karakoram fault in the Pangong metamorphic complex. These ages indicate the contemporaneity of the exhumation of the metamorphic rocks and the cooling of the leucogranites. (C) 2011 Elsevier B.V. All rights reserved.