936 results for "Architectures profondes"
Abstract:
Cycloaddition reactions have been employed in polymer synthesis since the mid-1960s. This critical review highlights recent notable advances in this field. For example, [2 + 2] cycloaddition reactions have been utilized in numerous polymerizations to enable the construction of strained polymer systems such as poly(2-azetidinone)s that can, in turn, afford polyfunctional beta-amino acid derived polymers. Polymers have also been synthesized successfully via (3 + 2) cycloaddition methods utilizing both thermal and high-pressure conditions. 'Click chemistry', a process involving the reaction of azides with alkynes, has also been adopted to generate linear and hyperbranched polymer architectures in a very efficient manner. [4 + 2] cycloadditions have likewise been utilized under thermal and high-pressure conditions to produce rigid polymers such as polyimides and polyphenylenes. These cycloaddition polymerization methods afford polymers with potential for use in high-performance applications such as high-temperature-resistant coatings and polymeric organic light-emitting diodes.
Abstract:
The interactions of bovine serum albumin (BSA) with three ethylene oxide/butylene oxide (E/B) copolymers having different block lengths and varying molecular architectures are examined in this study in aqueous solutions. Dynamic light scattering (DLS) indicates the absence of BSA-polymer binding in micellar systems of copolymers with lengthy hydrophilic blocks. In contrast, stable protein-polymer aggregates were observed in the case of the E18B10 block copolymer. Results from DLS and SAXS suggest the dissociation of E/B copolymer micelles in the presence of protein and the adsorption of polymer chains onto the BSA surface. At high protein loadings, bound BSA adopts a more compact conformation in solution. The secondary structure of the protein remains essentially unaffected even at high polymer concentrations. Raman spectroscopy was used to give insight into the configurations of the bound molecules in concentrated solutions. In the vicinity of the critical gel concentration of E18B10, the introduction of BSA can dramatically modify the phase diagram, inducing a gel-sol-gel transition. The overall picture of the E18B10-BSA interaction diagram reflects the shrinkage of the suspended particles due to BSA-induced destabilization of the micelles and the gelator nature of the globular protein. SAXS and rheology were used to further characterize the structure and flow behavior of the polymer-protein hybrid gels and sols.
Abstract:
The effect of hyperbranched macromolecular architectures (dendrimers) upon chirality has received significant attention in recent years in the light of the proposal of amplification of chirality. In particular, several studies have been carried out on the chiroptical properties of dendrimers that contain a chiral core and achiral branches in order to determine whether the chirality of the central core can be transmitted to the distal region of the macromolecule. Beyond purely academic interest, the presence of such chiral conformational order would be extremely useful in the development of asymmetric catalysts. In this paper, a novel class of chiral dendrimers is described: these perfect hyperbranched macromolecules have been prepared by a convergent route through the coupling of a chiral central core based upon tris(2-aminoethyl)amine with poly(aromatic amide ester) dendritic branches. The chiral properties of these dendrimers have been investigated by detailed optical rotation studies and circular dichroism analysis; the results of these studies are described herein.
Abstract:
Syntactic theory provides a rich array of representational assumptions about linguistic knowledge and processes. Such detailed and independently motivated constraints on grammatical knowledge ought to play a role in sentence comprehension. However, most grammar-based explanations of processing difficulty in the literature have attempted to use grammatical representations and processes per se to explain processing difficulty. They did not take into account that the description of higher cognition in mind and brain encompasses two levels: at the macrolevel, symbolic computation is performed, while at the microlevel, computation is achieved through processes within a dynamical system. One critical question is therefore how linguistic theory and dynamical systems can be unified to provide an explanation for processing effects. Here, we present such a unification for a particular account of syntactic theory: a parser for Stabler's Minimalist Grammars, implemented in the framework of Smolensky's Integrated Connectionist/Symbolic architectures. In simulations we demonstrate that the connectionist minimalist parser produces predictions which mirror global empirical findings from psycholinguistic research.
Abstract:
The design space of emerging heterogeneous multi-core architectures with reconfigurable elements makes it feasible to design mixed fine-grained and coarse-grained parallel architectures. This paper presents a hierarchical composite array design which extends the current design space of regular array design by combining a sequence of transformations. The technique is applied to derive a new design of a pipelined parallel regular array with different dataflow between phases of computation.
Abstract:
The performance benefit when using Grid systems comes from different strategies, among which partitioning the applications into parallel tasks is the most important. However, in most cases the enhancement coming from partitioning is smoothed by the effect of the synchronization overhead, mainly due to the high variability of completion times of the different tasks, which, in turn, is due to the large heterogeneity of Grid nodes. For this reason, it is important to have models which capture the performance of such systems. In this paper we describe a queueing-network-based performance model able to accurately analyze Grid architectures, and we use the model to study a real parallel application executed in a Grid. The proposed model improves the classical modelling techniques and highlights the impact of resource heterogeneity and network latency on the application performance.
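The synchronization effect described in this abstract can be illustrated with a minimal Monte Carlo sketch (not the paper's queueing-network model): a parallel phase finishes only when the slowest task does, so speedup degrades as task-time variability grows. All distributions and parameters below are illustrative assumptions, not values from the paper.

```python
import random

# Minimal fork-join sketch: n_tasks equal slices of work run on
# heterogeneous nodes, and a barrier waits for the slowest task.
# The uniform perturbation standing in for node heterogeneity is a
# hypothetical choice for illustration only.
def mean_parallel_time(n_tasks, heterogeneity, trials=2000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # each task nominally takes 1.0, inflated by its node's slowness
        times = [1.0 + rng.uniform(0.0, heterogeneity) for _ in range(n_tasks)]
        total += max(times)          # synchronization: wait for the slowest
    return total / trials

seq_time = 16.0                       # 16 unit-time slices run serially
for h in (0.0, 0.5, 2.0):
    par = mean_parallel_time(16, h)
    print(f"heterogeneity={h}: speedup ~ {seq_time / par:.1f}")
```

With zero heterogeneity the speedup is ideal (16x); as the spread of task times grows, the expected maximum grows and the speedup shrinks, which is the overhead the queueing model is built to capture.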
Abstract:
This paper presents the development of an autonomous surveillance UAV that competed in the Ministry of Defence Grand Challenge 2008. In order to focus on higher-level mission control, the UAV is built upon an existing commercially available stabilised R/C helicopter platform. The hardware architecture is developed to allow for non-invasive integration with the existing stabilised platform, and to enable the distributed processing of closed-loop control and mission goals. The resulting control system proved highly successful and was capable of flying within 40-knot gusts. The software and safety architectures were key to the success of the research and also hold the potential for use in the development of more complex systems comprising multiple UAVs.
Abstract:
The performance benefit when using grid systems comes from different strategies, among which partitioning the applications into parallel tasks is the most important. However, in most cases the enhancement coming from partitioning is smoothed by the effects of synchronization overheads, mainly due to the high variability in the execution times of the different tasks, which, in turn, is accentuated by the large heterogeneity of grid nodes. In this paper we design hierarchical, queuing network performance models able to accurately analyze grid architectures and applications. Thanks to the model results, we introduce a new allocation policy based on a combination between task partitioning and task replication. The models are used to study two real applications and to evaluate the performance benefits obtained with allocation policies based on task replication.
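The replication idea in this abstract can be sketched in a few lines (illustrative parameters, not the paper's allocation policy): running each task on several nodes and taking the first replica to finish trims the tail that slow nodes add to the barrier wait.

```python
import random

# Task-replication sketch: each of n_tasks runs on `replicas` nodes; a
# task completes when its fastest replica does, and the job completes
# when the slowest task does. Distributions are made-up assumptions.
def mean_completion(n_tasks, replicas, heterogeneity, trials=2000, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        task_times = [
            min(1.0 + rng.uniform(0.0, heterogeneity) for _ in range(replicas))
            for _ in range(n_tasks)
        ]
        total += max(task_times)     # barrier still waits for the slowest task
    return total / trials

for r in (1, 2, 3):
    print(f"replicas={r}: mean job time ~ {mean_completion(16, r, 2.0):.2f}")
```

The trade-off the paper's models quantify is visible even here: each extra replica shortens the expected completion time but consumes extra nodes that partitioning alone could have used.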
Abstract:
A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization errors by choosing amongst different network architectures (M. Stone, "Cross-validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 111-147, 1974). Based upon the minimization of LOO criteria (the mean square of the LOO errors or the LOO misclassification rate, respectively), we present two backward elimination algorithms as model post-processing procedures for regression and classification problems. The proposed backward elimination procedures exploit an orthogonalization procedure to maintain orthogonality between the subspace spanned by the pruned model and the deleted regressor. It is then shown that the LOO criteria used in both algorithms can be calculated via analytic recursive formulae, derived in this contribution, without actually splitting the estimation data set, thereby reducing computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several respects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification demonstrate that the proposed algorithms are viable post-processing methods to prune a model to gain extra sparsity and improved generalization.
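The key trick the abstract relies on, computing LOO errors analytically rather than refitting the model n times, can be illustrated for a linear-in-parameters model with the standard hat-matrix identity e_loo(i) = e(i) / (1 - h_ii). This is a simplified stand-in for the authors' recursive orthogonal-decomposition formulae, not their algorithm:

```python
import numpy as np

def loo_errors(X, y):
    """Analytic leave-one-out residuals for a least-squares fit of y on X.

    Uses e_loo(i) = e(i) / (1 - h_ii), where h_ii are the diagonal
    entries of the hat matrix; no data splitting or refitting occurs.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # single full-data fit
    resid = y - X @ beta                           # ordinary residuals
    H = X @ np.linalg.pinv(X.T @ X) @ X.T          # hat (projection) matrix
    return resid / (1.0 - np.diag(H))              # exact LOO residuals

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)
loo_mse = np.mean(loo_errors(X, y) ** 2)           # criterion to minimize
```

For ordinary least squares this identity is exact, so the LOO mean-square error used as the selection criterion costs no more than a single fit, which is the computational point the abstract makes for its more general recursive setting.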
Abstract:
The generalized honeycomb torus is a candidate for interconnection network architectures, and includes the honeycomb torus, honeycomb rectangular torus, and honeycomb parallelogramic torus as special cases. The existence of a Hamiltonian cycle is a basic requirement for interconnection networks, since it helps map a "token ring" parallel algorithm onto the associated network in an efficient way. Cho and Hsu [Inform. Process. Lett. 86 (4) (2003) 185-190] conjectured that every generalized honeycomb torus is Hamiltonian. In this paper, we prove this conjecture.
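The "token ring" mapping rests on finding a cycle that visits every node exactly once. A generic backtracking search (not Cho and Hsu's constructive proof) can illustrate the property on a toy graph; the 6-node ring below is a single honeycomb cell used purely as a small example, since real generalized honeycomb tori are far larger:

```python
# Generic Hamiltonian-cycle search by backtracking, for illustration only.
def hamiltonian_cycle(adj):
    """Return a Hamiltonian cycle in the graph `adj` (dict: node -> list
    of neighbours), or None if no such cycle exists."""
    n = len(adj)
    path = [0]          # start the cycle at an arbitrary node
    used = {0}

    def extend():
        if len(path) == n:
            return path[0] in adj[path[-1]]   # can we close the cycle?
        for v in adj[path[-1]]:
            if v not in used:
                path.append(v)
                used.add(v)
                if extend():
                    return True
                path.pop()
                used.discard(v)
        return False

    return list(path) if extend() else None

# Toy instance: a hexagonal honeycomb cell, vertices 0..5 in a ring.
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
cycle = hamiltonian_cycle(hexagon)   # visits all 6 vertices once
```

Once such a cycle is known, a token-ring algorithm is mapped by sending the token along consecutive cycle edges, so every hop uses a physical link of the network.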
Abstract:
We propose a bridge between two important parallel programming paradigms: data parallelism and communicating sequential processes (CSP). Data-parallel pipelined architectures obtained with the Alpha language can be embedded in a control-intensive application expressed in the CSP-based Handel formalism. The interface is formally defined from the semantics of the languages Alpha and Handel. This work will ease the design of compute-intensive applications on FPGAs.
Abstract:
We describe a high-level design method to synthesize multi-phase regular arrays. The method is based on deriving component designs using classical regular (or systolic) array synthesis techniques and composing these separately evolved component designs into a unified global design. Similarity transformations are applied to component designs in the composition stage in order to align dataflow between the phases of the computations. Three transformations are considered: rotation, reflection, and translation. The technique is aimed at the design of hardware components for high-throughput embedded systems applications, and we demonstrate this by deriving a multi-phase regular array for the 2-D DCT algorithm, which is widely used in many video communications applications.
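The alignment step can be sketched in miniature (the dataflow vectors here are made-up illustrations, not those of the paper's 2-D DCT derivation): each component array carries a set of 2-D dataflow vectors, and applying a rotation or reflection matrix to them re-orients one phase so its output flow matches the next phase's input flow; translation then only shifts the array's placement.

```python
import numpy as np

# Two of the similarity transformations named in the abstract, as
# 2x2 integer matrices acting on dataflow (dependence) vectors.
ROT90   = np.array([[0, -1], [1, 0]])    # rotation by 90 degrees CCW
REFLECT = np.array([[1, 0], [0, -1]])    # reflection about the x-axis

def transform_flows(flows, matrix):
    """Map each dataflow vector through a linear similarity transform."""
    return [matrix @ f for f in flows]

# Hypothetical phase-1 output flows: eastward and northward.
phase1_out = [np.array([1, 0]), np.array([0, 1])]
# Rotate them so they line up with a phase-2 array expecting
# northward and westward inputs.
phase2_in = transform_flows(phase1_out, ROT90)
```

The composition stage in the paper searches over such transformations per component so that, after re-orientation, data streams pass directly between the phases without reformatting buffers.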
Abstract:
As integrated software solutions reshape project delivery, they alter the bases for collaboration and competition across firms in complex industries. This paper synthesises and extends literatures on strategy in project-based industries and digitally-integrated work to understand how project-based firms interact with digital infrastructures for project delivery. Four identified strategies are to: 1) develop and use capabilities to shape the integrated software solutions that are used in projects; 2) co-specialize, developing complementary assets to work repeatedly with a particular integrator firm; 3) retain flexibility by developing and maintaining capabilities in multiple digital technologies and processes; and 4) manage interfaces, translating work into project formats for coordination while hiding proprietary data and capabilities in internal systems. The paper articulates the strategic importance of digital infrastructures for delivery as well as product architectures. It concludes by discussing managerial implications of the identified strategies and areas for further research.
Abstract:
This paper presents a queue-based agent architecture for multimodal interfaces. Using a novel approach to intelligently organise both agents and input data, this system has the potential to outperform current state-of-the-art multimodal systems, while at the same time allowing greater levels of interaction and flexibility. This assertion is supported by simulation test results showing that significant improvements can be obtained over normal sequential agent scheduling architectures. For real usage, this translates into faster, more comprehensive systems, without the limited application domain that restricts current implementations.