975 results for Efficient Solution


Relevance:

30.00%

Publisher:

Abstract:

A series of novel cyclometalated iridium(III) complexes bearing 2,4-diphenylquinoline ligands with fluorinated substituents were prepared and characterized by elemental analysis, NMR spectroscopy and mass spectrometry. The cyclic voltammetry, absorption, emission and electroluminescent properties of these complexes were systematically investigated. Electrochemical studies showed that oxidation of the fluorinated complexes occurred at more positive potentials (in the range 0.57-0.69 V) than that of the unfluorinated complex 1 (0.42 V). In terms of energy levels, the lowering of the LUMO by fluorination is significantly smaller than that of the HOMO. The weak, low-energy absorption bands in the range 300-600 nm are well resolved and are likely associated with MLCT and ³π-π* transitions. These complexes show strong orange-red emission in both solution and the solid state. The emission maxima of the fluorinated complexes 2, 3 and 4 were blue-shifted by 9, 24 and 15 nm, respectively, with respect to the unfluorinated analogue 1. Multilayer organic light-emitting diodes (OLEDs) were fabricated using the complexes as dopant materials. At the same doping level, the fluorinated complexes used as emitters achieved significantly higher performance and lower turn-on voltages than the unfluorinated counterpart 1.

Relevance:

30.00%

Publisher:

Abstract:

Phycoerythrins are widely used in foods, cosmetics, immunodiagnostics and as analytical reagents. An efficient one-step chromatographic method for the purification of R-phycoerythrin from Polysiphonia urceolata is described in this paper. Pure R-phycoerythrin was obtained with an absorbance ratio A(565)/A(280) of 5.6 and a high recovery yield of 67.33%, using DEAE-Sepharose Fast Flow chromatography with a pH gradient elution as an alternative to the common ionic-strength gradient elution. The absorption spectrum of R-phycoerythrin showed three absorbance maxima, at 565, 539 and 498 nm, and the fluorescence emission maximum at room temperature was measured to be 580 nm. The results of native-PAGE and SDS-PAGE showed no contamination by other proteins in the phycoerythrin solution, which suggests that this is an efficient method for the separation and purification of R-phycoerythrin from Polysiphonia urceolata.

Relevance:

30.00%

Publisher:

Abstract:

Alignment is a prevalent approach for recognizing 3D objects in 2D images. A major problem with current implementations is how to robustly handle errors that propagate from uncertainties in the locations of image features. This thesis gives a technique for bounding these errors. The technique makes use of a new solution to the problem of recovering 3D pose from three matching point pairs under weak-perspective projection. Furthermore, the error bounds are used to demonstrate that using line segments for features instead of points significantly reduces the false positive rate, to the extent that alignment can remain reliable even in cluttered scenes.

Relevance:

30.00%

Publisher:

Abstract:

A methodology for improved power-controller switching in mobile Body Area Networks operating within the ambient healthcare environment is proposed. The work extends anti-windup and bumpless-transfer results to provide a solution to the ambulatory networking problem that ensures sufficient biometric data can always be regenerated at the base station. The solution thereby guarantees a satisfactory quality of service for healthcare providers. Compensation is provided for the nonlinear hardware constraints that are a typical feature of the type of network under consideration, and graceful performance degradation in the face of hardware output-power saturation is demonstrated, thus conserving network energy in an optimal fashion.
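The abstract does not give the controller itself; as a rough illustration of the back-calculation anti-windup idea it builds on, the sketch below shows a discrete PI power controller whose integrator is bled off whenever the commanded transmit power hits the hardware saturation limit. The gains, limits and first-order plant used here are invented for the example.

```python
# Illustrative sketch (not the paper's actual controller): a discrete PI
# power controller with back-calculation anti-windup.  When the commanded
# transmit power saturates at the hardware limit, the integrator is bled
# off in proportion to the saturation excess instead of winding up.

def make_pi_controller(kp, ki, kb, u_min, u_max, dt):
    """Return a stateful PI step function with back-calculation anti-windup."""
    state = {"integral": 0.0}

    def step(error):
        # Unsaturated control signal.
        u = kp * error + state["integral"]
        # Hardware output-power saturation (the nonlinear constraint).
        u_sat = min(max(u, u_min), u_max)
        # Back-calculation: feed the saturation excess back into the
        # integrator so it tracks the achievable output.
        state["integral"] += dt * (ki * error + kb * (u_sat - u))
        return u_sat

    return step
```

Running this against a crude first-order plant (`power += 0.1 * (u - power)`) shows the commanded power staying inside the saturation limits while the tracking error is driven to zero, which is the graceful-degradation behaviour the abstract describes.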

Relevance:

30.00%

Publisher:

Abstract:

Recent years have witnessed a rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smartphones, tablets, laptops) we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while preserving viewer satisfaction. In this context, the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media, while reducing the effects of data loss on achievable video quality. The overall approach is focused on the strategic packetisation of the underlying scalable video and how best to utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit.
We also exploit redundancy techniques, such as error resiliency, to enhance the stream quality by ensuring a smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are in the creation of new segmentation and encapsulation techniques which increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques which reduce the effects of packet loss on viewable quality by leveraging the increase in the number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as new applications for existing techniques such as interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and the consistency of viewable quality.
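As one concrete instance of the interleaving idea mentioned above (a sketch, not the thesis's actual packetiser), the following distributes the frames of one GOP round-robin across packets, so that a single lost packet removes scattered frames rather than a contiguous run:

```python
def interleave_gop(frames, n_packets):
    """Round-robin the frames of one GOP across n_packets packets, so that
    losing any single packet drops only every n_packets-th frame."""
    packets = [[] for _ in range(n_packets)]
    for i, frame in enumerate(frames):
        packets[i % n_packets].append(frame)
    return packets

def deinterleave(packets, n_frames, n_packets, lost=()):
    """Reassemble the GOP, marking frames carried by lost packets as None."""
    frames = [None] * n_frames
    for p, packet in enumerate(packets):
        if p in lost:
            continue  # this packet never arrived
        for j, frame in enumerate(packet):
            frames[j * n_packets + p] = frame
    return frames
```

For a 12-frame GOP spread over 4 packets, losing packet 1 costs frames 1, 5 and 9; the decoder can conceal three isolated gaps far more gracefully than a burst of three consecutive frames.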

Relevance:

30.00%

Publisher:

Abstract:

This paper describes ways in which emergence engineering principles can be applied to the development of distributed applications. A distributed solution to the graph-colouring problem is used as a vehicle to illustrate some novel techniques. Each node acts autonomously to colour itself based only on its local view of its neighbourhood, following a simple set of carefully tuned rules. Randomness breaks symmetry and thus enhances stability. The algorithm has been developed to enable self-configuration in wireless sensor networks, and to reflect real-world configurations it operates on three-dimensional topologies (reflecting the propagation of radio waves and the placement of sensors in buildings, bridge structures, etc.). The algorithm's performance is evaluated and results are presented. It is shown to be simultaneously highly stable and scalable whilst achieving low convergence times. The use of eavesdropping gives rise to low interaction complexity and high efficiency in terms of communication overheads.
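The paper's exact rule set is not reproduced in the abstract, but the core loop of such a local colouring scheme can be sketched as follows: each node inspects only its neighbours' current colours, takes the smallest free colour, and uses a random coin to decide whether to act when it detects a conflict (the symmetry-breaking role of randomness). This is a centralised software simulation of the behaviour, not the sensor-network implementation:

```python
import random

def local_colouring(adj, max_rounds=100, seed=1):
    """Simulate an emergent colouring: each node sees only its neighbours'
    current colours and independently picks the smallest free colour;
    on a conflict, a random coin decides whether to back off this round."""
    rng = random.Random(seed)
    colour = {v: None for v in adj}
    for _ in range(max_rounds):
        changed = False
        for v in adj:  # sequential visit stands in for unsynchronised nodes
            taken = {colour[u] for u in adj[v] if colour[u] is not None}
            if colour[v] is None or colour[v] in taken:
                if colour[v] is None or rng.random() < 0.5:
                    colour[v] = next(c for c in range(len(adj)) if c not in taken)
                    changed = True
        if not changed and all(c is not None for c in colour.values()):
            break  # stable: every node coloured, no conflicts
    return colour
```

The adjacency structure can encode any topology, including the 3D sensor placements the paper targets; the simulation converges to a proper colouring in a small number of rounds.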

Relevance:

30.00%

Publisher:

Abstract:

How the CNS deals with the issue of motor redundancy remains a central question for motor control research. Here we investigate the means by which neuromuscular and biomechanical factors interact to resolve motor redundancy in rhythmic multijoint arm movements. We used a two-degree-of-freedom motorised robot arm to manipulate the dynamics of rhythmic flexion-extension (FE) and supination-pronation (SP) movements at the elbow-joint complex. Participants were required to produce rhythmic FE and SP movements, either in isolation or in combination (at the phase relationship of their choice), while we recorded the activity of key bifunctional muscles. When performed in combination, most participants spontaneously produced an in-phase pattern of coordination in which flexion is synchronised with supination. The activity of the Biceps Brachii (BB), the strongest arm muscle, which also has the largest moment arms in both flexion and supination, was significantly higher for FE and SP performed in combination than in isolation, suggesting optimal exploitation of the mechanical advantage of this muscle. In a separate condition, participants were required to produce a rhythmic SP movement while a rhythmic FE movement was imposed by the motorised robot. Simulations based upon a musculoskeletal model of the arm demonstrated that in this context, the most efficient use of the force-velocity relationship of BB requires that an anti-phase pattern of coordination (flexion synchronised with pronation) be produced. In practice, the participants maintained the in-phase behaviour, and BB activity was higher than for SP performed in isolation. This finding suggests that the neural organisation underlying the exploitation of bifunctional muscle properties, in the natural context, constrains the system to maintain the in-phase pattern of coordination.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a feature selection method for data classification, which combines a model-based variable selection technique with a fast two-stage subset selection algorithm. The relationship between a specified (and complete) set of candidate features and the class label is modelled using a non-linear full regression model which is linear in the parameters. The performance of a sub-model, measured by the sum of squared errors (SSE), is used to score the informativeness of the subset of features involved in that sub-model. The two-stage subset selection algorithm approaches a solution sub-model at which the SSE is locally minimised. The features involved in the solution sub-model are then selected as inputs to support vector machines (SVMs) for classification. The memory requirement of this algorithm is independent of the number of training patterns, a property that makes the method suitable for applications executed on mobile devices, where physical RAM is very limited. An application was developed for activity recognition which implements the proposed feature selection algorithm and an SVM training procedure. Experiments were carried out with the application running on a PDA for human activity recognition using accelerometer data. A comparison with an information-gain-based feature selection method demonstrates the effectiveness and efficiency of the proposed algorithm.
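The first (forward) stage of an SSE-scored subset selection can be sketched as below: candidate features are added greedily, each candidate sub-model is fitted by least squares (linear in the parameters), and its SSE scores the subset. This is only a sketch of the forward stage on synthetic data; the paper's second, refinement stage and its memory-efficient formulation are omitted.

```python
import numpy as np

def forward_select(X, y, k):
    """Greedy forward stage of a two-stage subset selection: grow the
    feature subset one candidate at a time, scoring each candidate
    sub-model (least-squares fit) by its sum of squared errors (SSE)."""
    n, d = X.shape
    selected = []
    for _ in range(k):
        best_j, best_sse = None, np.inf
        for j in range(d):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            sse = float(np.sum((y - cols @ beta) ** 2))
            if sse < best_sse:  # keep the candidate with the lowest SSE
                best_j, best_sse = j, sse
        selected.append(best_j)
    return selected
```

On data generated as `y = 3*x0 - 2*x3 + noise`, the procedure recovers features 0 and 3; the selected subset would then be handed to the SVM as its input features.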

Relevance:

30.00%

Publisher:

Abstract:

Many scientific applications are programmed using hybrid programming models that use both message passing and shared memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared memory or message passing, in isolation. The potential solution space, and thus the challenge, increases substantially when optimizing hybrid models, since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB-MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74 percent on average and up to 13.8 percent) with some performance gain (up to 7.5 percent) or negligible performance loss.
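The decision step of such a scheme can be illustrated as follows, assuming the statistical models that predict time and power per (thread count, frequency) configuration already exist; the function sketched here simply picks the feasible configuration with the lowest predicted energy, rejecting configurations slower than an allowed slowdown over the fastest one. The toy cost models in the usage below are invented:

```python
def choose_configuration(predict_time, predict_power, thread_counts,
                         frequencies, max_slowdown=1.05):
    """Pick the (threads, frequency) pair minimising predicted energy
    (power x time), subject to a performance constraint: configurations
    predicted to run slower than max_slowdown times the fastest one are
    rejected.  predict_time/predict_power stand in for the statistical
    models that a DCT+DVFS scheme would fit per program region."""
    configs = [(t, f) for t in thread_counts for f in frequencies]
    t_best = min(predict_time(t, f) for t, f in configs)
    feasible = [(t, f) for t, f in configs
                if predict_time(t, f) <= max_slowdown * t_best]
    # Energy = predicted power x predicted time.
    return min(feasible, key=lambda c: predict_power(*c) * predict_time(*c))
```

With a toy model where time falls with threads and frequency but carries a per-thread overhead, and power grows as threads times frequency squared, relaxing the slowdown bound lets the search trade a little speed for a large energy saving.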

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The aim of this work was to determine whether volumetric modulated arc therapy (VMAT) plans created for constant dose-rate delivery (cdrVMAT) are a viable alternative to step-and-shoot five-field intensity modulated radiation therapy (IMRT). Materials and methods: The cdrVMAT plans, inverse planned on a treatment planning system with no solution to account for the couch top or rails, were created for delivery on a linear accelerator with no variable dose-rate control system. A series of five-field IMRT and cdrVMAT plans were created using dual partial arcs (gantry rotating between 260° and 100°) with 4° control points for ten prostate patients, with the average rectal constraint incrementally increased. Pareto fronts were compared between the two techniques for each patient in terms of planning target volume homogeneity and average rectal dose. Tumour control probability and normal tissue complication probability values for each technique were also investigated. The delivery parameters [monitor units (MU) and time] and delivery accuracy of the IMRT and VMAT plans were also compared. Results: Pareto fronts showed that the dual partial arc plans were superior to the five-field IMRT plans, particularly for the clinically acceptable plans, where average rectal doses were lower for the rotational plans (p = 0·009) with no statistically significant difference in target homogeneity. The cdrVMAT plans had significantly more MU (p = 0·005), but the average delivery time was 42% less than for the IMRT plans. All clinically acceptable cdrVMAT plans were accurate in their delivery (gamma 99·2 ± 1·1%, 3%/3 mm criteria). Conclusions: Accurate delivery of dual partial arc cdrVMAT avoiding the couch top and rails has been demonstrated.

Relevance:

30.00%

Publisher:

Abstract:

Two models that can predict the voltage-dependent scattering from liquid crystal (LC)-based reflectarray cells are presented. The validity of both numerical techniques is demonstrated using measured results in the frequency range 94-110 GHz. The more rigorous approach models, for each voltage, the inhomogeneous and anisotropic permittivity of the LC as a stratified medium in the direction of the biasing field. This accounts for the different tilt angles of the LC molecules inside the cell, calculated from the solution of the elastic problem. The other model is based on an effective homogeneous permittivity tensor that corresponds to the average tilt angle along the longitudinal direction for each biasing voltage. In this model, convergence problems associated with the longitudinal inhomogeneity are avoided and the computational efficiency is improved. Both models provide a correspondence between the reflection coefficient (losses and phase shift) of the LC-based reflectarray cell and the biasing voltage, which can be used to design beam-scanning reflectarrays. The accuracy and the efficiency of both models are also analyzed and discussed.
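As a much-simplified, quasi-static illustration of the difference between the two models (the paper's models solve the full electromagnetic scattering problem at 94-110 GHz), the sketch below compares a stratified treatment of a tilt profile, where equal-thickness layers combine in series through a harmonic mean, with a single effective layer at the average tilt. The permittivity values and tilt profile are invented for the example.

```python
import numpy as np

def eps_layer(theta, eps_perp, eps_par):
    """Quasi-static permittivity of one LC layer for a field normal to the
    layers, with director tilt theta (radians) measured from the layer plane."""
    return eps_perp + (eps_par - eps_perp) * np.sin(theta) ** 2

def stratified_eps(thetas, eps_perp, eps_par):
    """Stratified model: equal-thickness layers in series combine through
    the harmonic mean of their per-layer permittivities."""
    return len(thetas) / np.sum(1.0 / eps_layer(np.asarray(thetas),
                                                eps_perp, eps_par))

def averaged_eps(thetas, eps_perp, eps_par):
    """Effective-homogeneous model: one layer at the average tilt angle."""
    return eps_layer(np.mean(thetas), eps_perp, eps_par)
```

For a uniform tilt profile the two models coincide exactly; for a realistic nonuniform profile they differ slightly, which is the trade-off between rigour and computational efficiency the paper quantifies.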

Relevance:

30.00%

Publisher:

Abstract:

The scale of BT's operations necessitates the use of very large scale computing systems, and the storage and management of large volumes of data. Customer product portfolios are an important form of data which can be difficult to store in a space efficient way. The difficulties arise from the inherently structured form of product portfolios, and the fact that they change over time as customers add or remove products. This paper introduces a new data-modelling abstraction called the List_Tree. It has been designed specifically to support the efficient storage and manipulation of customer product portfolios, but may also prove useful in other applications with similar general requirements.

Relevance:

30.00%

Publisher:

Abstract:

The aluminum complex Alq(3) (q = 8-hydroxyquinolinate), which has important applications in organic light-emitting diode materials, is shown to be readily synthesized as a pure phase under solvent-free mechanochemical conditions from Al(OAc)(2)OH and 8-hydroxyquinoline by ball milling. The initial product of the mechanochemical synthesis is a novel acetic acid solvate of Alq(3), and the alpha polymorph of Alq(3) is obtained on subsequent heating/desolvation of this phase. The structure of the mechanochemically prepared acetic acid solvate of Alq(3) has been determined directly from powder X-ray diffraction data and is shown to be a different polymorph from the corresponding acetic acid solvate prepared by solution-state crystallization of Alq(3) from acetic acid. Significantly, the mechanochemical synthesis of Alq(3) is shown to be fully scalable across two orders of magnitude from 0.5 to 50 g scale. The Alq(3) sample obtained from the solvent-free mechanochemical synthesis is analytically pure and exhibits identical photoluminescence behavior to that of a sample prepared by the conventional synthetic route.

Relevance:

30.00%

Publisher:

Abstract:

In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this can be done efficiently in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that each evaluation is usually computationally intensive and scales badly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We prove that the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
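The core idea can be sketched for the simple case of a signal-scale and noise-variance hyperparameter pair (the paper's identities also cover the Jacobian and Hessian, and more general settings): after one O(N^3) eigendecomposition of the kernel matrix, every marginal log-likelihood evaluation reduces to O(N) sums over eigenvalues, because K = s*K0 + v*I shares its eigenvectors with K0.

```python
import numpy as np

def make_fast_mll(K0, y):
    """One-off O(N^3) eigendecomposition of the kernel matrix; afterwards
    the GP marginal log-likelihood for hyperparameters (signal scale s,
    noise variance v) in K = s*K0 + v*I costs only O(N) per evaluation."""
    lam, Q = np.linalg.eigh(K0)   # O(N^3), done once
    alpha2 = (Q.T @ y) ** 2       # squared projections of the targets
    n = len(y)

    def mll(s, v):
        d = s * lam + v           # eigenvalues of s*K0 + v*I
        # y^T K^{-1} y and log|K| expressed through the eigenvalues: O(N).
        return -0.5 * (np.sum(alpha2 / d) + np.sum(np.log(d))
                       + n * np.log(2 * np.pi))

    return mll
```

A global optimiser can now call `mll` thousands of times at negligible cost; each value agrees to machine precision with the direct O(N^3) computation via a Cholesky or `slogdet`-based formula.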

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a hardware solution for network flow processing at full line rate. Advanced memory architecture using DDR3 SDRAMs is proposed to cope with the flow match limitations in packet throughput, number of supported flows and number of packet header fields (or tuples) supported for flow identifications. The described architecture has been prototyped for accommodating 8 million flows, and tested on an FPGA platform achieving a minimum of 70 million lookups per second. This is sufficient to process internet traffic flows at 40 Gigabit Ethernet.
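The flow-match step can be illustrated in software as follows; the bucket layout, eviction policy and hash choice here are invented for the example, whereas the paper's design realises the lookup in a DDR3 SDRAM memory architecture on an FPGA:

```python
# Software analogue of the flow-lookup step.  Keys are 5-tuples
# (src_ip, dst_ip, src_port, dst_port, protocol); each packet costs
# one bucket read, mirroring one burst access to a DRAM row.

class FlowTable:
    """Hash a 5-tuple flow key into a fixed array of small buckets,
    mimicking a burst-sized memory row holding a few candidate entries."""

    def __init__(self, n_buckets=1 << 20, bucket_size=4):
        self.buckets = [[] for _ in range(n_buckets)]
        self.n_buckets = n_buckets
        self.bucket_size = bucket_size

    def _index(self, key):
        return hash(key) % self.n_buckets

    def lookup_or_insert(self, key):
        """Return (flow_state, is_new) for the packet's flow."""
        bucket = self.buckets[self._index(key)]
        for k, state in bucket:
            if k == key:                 # existing flow: update its state
                state["packets"] += 1
                return state, False
        if len(bucket) >= self.bucket_size:
            bucket.pop(0)                # evict oldest entry on overflow
        state = {"packets": 1}
        bucket.append((key, state))
        return state, True
```

The fixed bucket size is what bounds the per-packet work, which is the property a hardware design needs to sustain a guaranteed lookup rate at line speed.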