878 results for Multi-component systems


Relevance:

80.00%

Abstract:

A new macroporous stationary phase bearing 'tweezer' receptors that exhibit specificity for cholesterol has been constructed from rigid multifunctional vinylic monomers derived from 3,5-dibromobenzoic acid, propargyl alcohol and cholesterol. The synthesis of the novel tweezer monomer, which contains two cholesterol receptor arms, using palladium-mediated Sonogashira methodologies and carbonate couplings is reported. The subsequent co-polymerisation of this tweezer monomer with a range of cross-linking agents via a 'pseudo' molecular imprinting approach afforded a diverse set of macroporous materials. The selectivity and efficacy of these materials for cholesterol binding were assessed using a chromatographic screening process. The optimum macroporous stationary phase composition was subsequently used to construct monolithic solid phase extraction columns for the selective extraction of cholesterol from multi-component mixtures of structurally related steroids.

Relevance:

80.00%

Abstract:

The emergence of the mechanical bond during the past 25 years is giving chemistry a fillip in more ways than one. While its arrival on the scene is already impacting materials science and molecular nanotechnology, it is providing a new lease of life to chemical synthesis, where mechanical bond formation occurs as a consequence of the all-important templation orchestrated by molecular recognition and self-assembly. The way in which covalent bond formation activates noncovalent bonding interactions, switching on molecular recognition that leads to self-assembly, and the template-directed synthesis of mechanically interlocked molecules, of which the so-called catenanes and rotaxanes may be regarded as the prototypes, has introduced a level of integration into chemical synthesis that has not previously been attained jointly at the supramolecular and molecular levels. The challenge now is to carry this level of integration during molecular synthesis beyond relatively small molecules into the realms of precisely functionalized extended molecular structures and superstructures that perform functions in a collective manner as the key sources of instruction, activation, and performance in multi-component integrated circuits and devices. These forays into organic chemistry by a scientific nomad are traced through thick and thin from the Athens of the North to the Windy City by Lake Michigan, with interludes on the edge of the Canadian Shield beside Lake Ontario, in the Socialist Republic of South Yorkshire, on the Plains of Cheshire beside the Wirral, in the Midlands in the Heartland of Albion, and in the City of Angels beside the Peaceful Sea. (C) 2008 Elsevier Ltd. All rights reserved.

Relevance:

80.00%

Abstract:

Can autonomic computing concepts be applied to traditional multi-core systems found in high performance computing environments? In this paper, we propose a novel synergy between parallel computing and swarm robotics to offer a new computing paradigm, 'Swarm-Array Computing', that can harness and apply autonomic computing to parallel computing systems. One of the three approaches proposed for swarm-array computing, based on landscapes of intelligent cores, in which the cores of a parallel computing system are abstracted as swarm agents, is investigated. In this approach a task is executed and transferred seamlessly between cores, thereby achieving the self-ware properties that characterize autonomic computing. FPGAs are considered as an experimental platform, taking into account their application in space robotics. The feasibility of the proposed approach is validated on the SeSAm multi-agent simulator.
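To make the core idea concrete, the following is a minimal toy sketch of a task hopping between cores abstracted as agents; all names and the failure model are illustrative inventions, not the paper's SeSAm model.

# Toy sketch of the "landscapes of intelligent cores" idea: cores are
# abstracted as agents, and a task hops to a healthy core when its
# current host is predicted to fail. Names are illustrative only.
import random

class CoreAgent:
    def __init__(self, core_id):
        self.core_id = core_id
        self.healthy = True

    def predict_failure(self):
        # Stand-in for a real health monitor (e.g. on-chip sensors).
        self.healthy = random.random() > 0.2
        return not self.healthy

def run_task(task_steps, cores):
    core = cores[0]
    for step in range(task_steps):
        if core.predict_failure():
            # Transfer the task seamlessly to another healthy core.
            candidates = [c for c in cores if c.healthy and c is not core]
            if not candidates:
                raise RuntimeError("no healthy core available")
            core = random.choice(candidates)
            print(f"step {step}: task migrated to core {core.core_id}")
        # ... execute one unit of work on `core` ...
    return "done"

if __name__ == "__main__":
    print(run_task(10, [CoreAgent(i) for i in range(4)]))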

Relevance:

80.00%

Abstract:

A new identification algorithm is introduced for the Hammerstein model, which consists of a nonlinear static function followed by a linear dynamical model. The nonlinear static function is characterised using the Bezier-Bernstein approximation. The identification method is based on a hybrid scheme that combines the inverse of de Casteljau's algorithm, the least squares algorithm and the Gauss-Newton algorithm subject to constraints. Related work and the extension of the proposed algorithm to multi-input multi-output systems are discussed. Numerical examples, including systems with some hard nonlinearities, are used to illustrate the efficacy of the proposed approach through comparisons with other approaches.
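As an illustration of the model class only, the sketch below identifies a Hammerstein system by ordinary over-parameterised least squares with a polynomial static nonlinearity, a much simpler substitute for the paper's Bezier-Bernstein/de Casteljau hybrid scheme; all numbers are illustrative.

# Hammerstein identification via the classic over-parameterisation
# least-squares trick (polynomial nonlinearity, not Bezier-Bernstein).
import numpy as np

rng = np.random.default_rng(0)

# True system: static nonlinearity f(u) = u + 0.5 u^2 feeding the
# linear dynamics y(k) = 0.7 y(k-1) + v(k-1), with v = f(u).
N = 500
u = rng.uniform(-1.0, 1.0, N)
v = u + 0.5 * u**2
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.7 * y[k - 1] + v[k - 1]
y += 0.01 * rng.standard_normal(N)  # measurement noise

# Regressors: y(k-1) and monomials u(k-1)^m, m = 1..3.
Phi = np.column_stack([y[:-1], u[:-1], u[:-1]**2, u[:-1]**3])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated [a1, b*c1, b*c2, b*c3]:", np.round(theta, 3))
# Expect roughly [0.7, 1.0, 0.5, 0.0]; the b*c products reflect the
# usual Hammerstein scaling ambiguity, resolved e.g. by fixing b = 1.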

Relevance:

80.00%

Abstract:

A one-dimensional shock-reflection test problem in the case of slab, cylindrical or spherical symmetry is discussed for multi-component flows. The differential equations for a similarity solution are derived and then solved numerically in conjunction with the Rankine-Hugoniot shock relations.
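For reference, the Rankine-Hugoniot relations invoked here take the standard single-fluid conservation form across a shock travelling at speed $s$ (generic notation $\rho$, $u$, $p$, $E$; a sketch, not the paper's multi-component formulation):

\[
\begin{aligned}
\rho_1 (u_1 - s) &= \rho_2 (u_2 - s),\\
\rho_1 u_1 (u_1 - s) + p_1 &= \rho_2 u_2 (u_2 - s) + p_2,\\
E_1 (u_1 - s) + p_1 u_1 &= E_2 (u_2 - s) + p_2 u_2,
\end{aligned}
\]

where $E = \rho\,(e + u^2/2)$ is the total energy density; a multi-component flow typically supplements these with continuity of each species' mass flux across the shock.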

Relevance:

80.00%

Abstract:

Quantitative analysis by mass spectrometry (MS) is a major challenge in proteomics, as the correlation between analyte concentration and signal intensity is often poor due to varying ionisation efficiencies in the presence of molecular competitors. However, relative quantitation methods that utilise differential stable isotope labelling and mass spectrometric detection are available. Many drawbacks inherent to chemical labelling methods (ICAT, iTRAQ) can be overcome by metabolic labelling with amino acids containing stable isotopes (e.g. 13C and/or 15N) in methods such as Stable Isotope Labelling with Amino acids in Cell culture (SILAC). SILAC has also been used for labelling of proteins in plant cell cultures (1) but is not suitable for whole-plant labelling: plants are usually autotrophic (fixing carbon from atmospheric CO2), so labelling with carbon isotopes is impractical. In addition, SILAC is expensive. Recently, Arabidopsis cell cultures were labelled with 15N in a medium containing nitrate as the sole nitrogen source, which was shown to be suitable for quantifying proteins and nitrogen-containing metabolites from this cell culture (2,3). Labelling whole plants, however, offers the advantage of studying quantitatively the response to stimulation or disease of a whole multicellular organism, or of multi-organism systems, at the molecular level. Furthermore, plant metabolism enables the use of inexpensive labelling media without introducing additional stress to the organism. Finally, hydroponics is ideal for undertaking metabolic labelling under extremely well-controlled conditions. We demonstrate the suitability of metabolic 15N hydroponic isotope labelling of entire plants (HILEP) for relative quantitative proteomic analysis by mass spectrometry. To evaluate this methodology, Arabidopsis plants were grown hydroponically in 14N and 15N media and subjected to oxidative stress.
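Downstream of such labelling, relative quantitation reduces to comparing light (14N) and heavy (15N) peak intensities for each peptide pair; a minimal sketch with invented intensities (peptide names and values are illustrative only):

# Relative quantitation from paired light (14N) / heavy (15N)
# peptide intensities; all data here are hypothetical.
import math

peptide_pairs = {
    "LVNELTEFAK": (1.2e6, 2.5e6),   # (light, heavy) extracted intensities
    "YLYEIARR":   (8.0e5, 7.6e5),
    "QTALVELVK":  (3.1e6, 6.4e6),
}

log_ratios = {p: math.log2(h / l) for p, (l, h) in peptide_pairs.items()}

# Median-normalise so unchanged peptides centre on a ratio of 1, a
# common correction for unequal mixing of the two samples.
median = sorted(log_ratios.values())[len(log_ratios) // 2]
for p, r in log_ratios.items():
    print(f"{p}: normalised log2(15N/14N) = {r - median:+.2f}")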

Relevance:

80.00%

Abstract:

Recent research in multi-agent systems incorporates fault tolerance concepts but does not explore the extension and implementation of such ideas for large scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to successfully complete the task. The feasibility of the approach is validated by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
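For concreteness, here is a minimal sketch of a parallel reduction with the Message Passing Interface (via mpi4py), showing the kind of validation workload mentioned above rather than the paper's agent machinery:

# Minimal MPI parallel reduction sketch (mpi4py).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each process owns one sub-task: a partial sum over its own slice.
local = sum(range(rank * 1000, (rank + 1) * 1000))

# Reduce the partial results to rank 0; an agent-based variant would
# reroute this work if a core were predicted to fail.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("global sum:", total)
# Run with: mpiexec -n 4 python reduction.py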

Relevance:

80.00%

Abstract:

Recent research in multi-agent systems incorporates fault tolerance concepts. However, the research does not explore the extension and implementation of such ideas for large scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely 'Intelligent Agents'. In the approach considered, a task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to successfully complete the task. The agents hence contribute towards fault tolerance and towards building reliable systems. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator and by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.

Relevance:

80.00%

Abstract:

One of the important goals of intelligent buildings, especially in commercial applications, is not only to minimize energy consumption but also to enhance occupants' comfort. However, most current development in intelligent buildings focuses on implementing automatic building control systems that support an energy efficiency approach; occupants' preferences are not adequately considered. To improve occupants' wellbeing and energy efficiency in intelligent environments, we develop four types of agents, combined to form a multi-agent system that controls intelligent buildings. Users' preferential conflicts are discussed. Furthermore, a negotiation mechanism for conflict resolution has been proposed in order to reach an agreement, and has been represented in syntax directed translation schemes for future implementation and testing. Keywords: conflict resolution, intelligent buildings, multi-agent systems (MAS), negotiation strategy, syntax directed translation schemes (SDTS).
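As a toy illustration only (the paper specifies its mechanism as syntax directed translation schemes, which is not reproduced here), conflicting set-point preferences can be reconciled by iterative concession; all names and the concession rule are hypothetical:

# Toy preference-conflict resolution by iterative concession.
def negotiate(preferences, step=0.5, max_rounds=20):
    """Each occupant concedes toward the group mean until all offers
    fall within `step` of the mean; returns the agreed set point."""
    offers = dict(preferences)
    for _ in range(max_rounds):
        mean = sum(offers.values()) / len(offers)
        if all(abs(v - mean) <= step for v in offers.values()):
            return mean  # agreement reached
        offers = {who: v + step * (1 if v < mean else -1)
                  for who, v in offers.items()}
    return mean  # fall back to the mean if no convergence

# Conflicting temperature preferences (degrees C) for one zone:
print(negotiate({"alice": 20.0, "bob": 24.0, "carol": 22.5}))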

Relevance:

80.00%

Abstract:

Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems; this approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVM), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection, describing a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.

Relevance:

80.00%

Abstract:

GPR (Ground Penetrating Radar) results are shown for perpendicular broadside and parallel broadside antenna orientations. Performance in the detection and localization of concrete tubes and steel tanks is compared as a function of acquisition configuration. The comparison is done using 100 MHz and 200 MHz center frequency antennas. All tubes and tanks are buried at the geophysical test site of IAG/USP in Sao Paulo city, Brazil. The results show that the long steel pipe with a 38-mm diameter was well detected with the perpendicular broadside configuration. The concrete tubes were better detected with the parallel broadside configuration, clearly showing hyperbolic diffraction events from all targets up to 2-m depth. Steel tanks were detected with both configurations; however, the parallel broadside configuration generated, to a much lesser extent, an apparent hyperbolic reflection corresponding to constructive interference of the diffraction hyperbolas of adjacent targets placed at the same depth. Vertical concrete tubes and steel tanks were better constrained with parallel broadside antennas, where the apexes of the diffraction hyperbolas corresponded better to the horizontal location of the buried targets. The two configurations provide details about buried targets, emphasizing how GPR multi-component configurations have the potential to improve subsurface image quality as well as to discriminate different buried targets. They are judged to hold some applicability in geotechnical and geoscientific studies. (C) 2009 Elsevier B.V. All rights reserved.

Relevance:

80.00%

Abstract:

Anisotropy of thermal stresses in confined dusty plasmas is considered. It is shown that in a multi-component low-temperature plasma containing electrons, ions and dust, the complicated dependence of the ion viscosity on ion temperature gradients leads to a plasma equilibrium state with anisotropic pressure. This pressure anisotropy can be of the order of the ion pressure in some limiting cases, in which the ion Larmor radius or the ion mean free path is of the order of the characteristic length of the plasma nonuniformity. For a sufficiently large dust number density, the thermal stresses contribute to the plasma pressure anisotropy and to its spatial dependence. It is not yet clear whether this equilibrium state is stable; under these conditions, convective plasma flows can arise in confinement devices, so this question needs special consideration.

Relevance:

80.00%

Abstract:

We propose an approach to the quantum-mechanical description of relativistic orientable objects. It generalizes Wigner's ideas concerning the treatment of nonrelativistic orientable objects (in particular, a nonrelativistic rotator) with the help of two reference frames (space-fixed and body-fixed). A technical realization of this generalization (for instance, in 3+1 dimensions) amounts to introducing wave functions that depend on elements of the Poincaré group G. A complete set of transformations that test the symmetries of an orientable object and of the embedding space belongs to the group G x G. All such transformations can be studied by considering a generalized regular representation of G in the space of scalar functions on the group, f(x, z), that depend on the Minkowski space points x ∈ G/Spin(3,1) as well as on the orientation variables given by the elements z of a matrix Z ∈ Spin(3,1). In particular, the field f(x, z) is a generating function of the usual multi-component spin-tensor fields. In the theory under consideration, there are four different types of spinors, and an orientable object is characterized by ten quantum numbers. We study the corresponding relativistic wave equations and their symmetry properties.
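For orientation, the generalized regular representation referred to here builds on the standard two-sided action of G x G on scalar functions over the group (a generic sketch, not the paper's notation):

\[
(T(g_1, g_2) f)(g) = f(g_1^{-1}\, g\, g_2), \qquad (g_1, g_2) \in G \times G,\ g \in G,
\]

with $g_2 = e$ (or $g_1 = e$) recovering the usual left (or right) regular representation.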

Relevance:

80.00%

Abstract:

The reconstruction of Extensive Air Showers (EAS) observed by particle detectors at the ground is based on the characteristics of observables like the lateral particle density and the arrival times. The lateral densities, inferred for different EAS components from detector data, are usually parameterised by applying various lateral distribution functions (LDFs). The LDFs are used in turn for evaluating quantities like the total number of particles or the density at particular radial distances. Typical expressions for LDFs anticipate azimuthal symmetry of the density around the shower axis. Deviations of the lateral particle density from this assumption, arising for various reasons, are smoothed out in the case of compact arrays like KASCADE, but not in the case of arrays like Grande, which only sample a smaller part of the azimuthal variation. KASCADE-Grande, an extension of the former KASCADE experiment, is a multi-component EAS experiment located at the Karlsruhe Institute of Technology (Campus North), Germany. The lateral distributions of charged particles are deduced from the basic information provided by the Grande scintillators, the energy deposits, first in the observation plane and then in the intrinsic shower plane. In all steps azimuthal dependences should be taken into account. As the energy deposit in the scintillators depends on the angles of incidence of the particles, azimuthal dependences are already involved in the first step: the conversion from the energy deposits to the charged particle density. This is done by using the Lateral Energy Correction Function (LECF), which evaluates the mean energy deposited by a charged particle, taking into account the contribution of other particles (e.g. photons) to the energy deposit. Using a very fast procedure for the evaluation of the energy deposited by various particles, we prepared realistic LECFs depending on the angle of incidence of the shower and on the radial and azimuthal coordinates of the location of the detector. Mapping the lateral density from the observation plane onto the intrinsic shower plane does not remove the azimuthal dependences arising from geometric and attenuation effects, in particular for inclined showers. Realistic procedures for applying correction factors are developed. Specific examples are given of the bias introduced by neglecting the azimuthal asymmetries in the conversion from the energy deposit in the Grande detectors to the lateral density of charged particles in the intrinsic shower plane. (C) 2011 Elsevier B.V. All rights reserved.
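Schematically, the first conversion step described above can be written as (notation illustrative, not the collaboration's):

\[
\rho_{\mathrm{ch}}(r, \varphi) = \frac{E_{\mathrm{dep}}(r, \varphi)}{\mathrm{LECF}(\theta, r, \varphi)},
\]

where $E_{\mathrm{dep}}$ is the energy deposit measured by a Grande station and the LECF gives the mean energy deposited per charged particle for shower zenith angle $\theta$ and station coordinates $(r, \varphi)$ in the shower frame, including the contribution of accompanying photons.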