927 results for implementation method
Abstract:
As a promising method for pattern recognition and function estimation, least squares support vector machines (LS-SVM) express the training in terms of solving a linear system instead of the quadratic programming problem required by conventional support vector machines (SVM). In this paper, by using the information provided by the equality constraint, we transform the minimization problem with a single equality constraint in LS-SVM into an unconstrained minimization problem, and then propose reduced formulations for LS-SVM. With this transformation, the number of times the conjugate gradient (CG) method must be applied, a highly time-consuming step in obtaining the numerical solution, is reduced from two, as proposed by Suykens et al. (1999), to one. A comparison of the computational speed of our method with the CG method proposed by Suykens et al. and with the first-order and second-order SMO methods on several benchmark data sets shows a reduction of training time by up to 44%. (C) 2011 Elsevier B.V. All rights reserved.
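To make the baseline concrete, the sketch below trains an LS-SVM regressor in the style the abstract attributes to Suykens et al.: two conjugate gradient solves with the positive-definite matrix K + I/gamma, after which the bias and dual coefficients follow from the equality constraint. This is a minimal illustration with an assumed RBF kernel and illustrative parameter names, not the paper's reduced single-solve formulation.

```python
import numpy as np
from scipy.sparse.linalg import cg

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Baseline LS-SVM regression training: two CG solves with the
    positive-definite matrix H = K + I/gamma (the step the paper
    reduces to a single solve). Parameter choices are illustrative."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))        # RBF kernel matrix (assumed kernel)
    H = K + np.eye(n) / gamma
    eta, _ = cg(H, np.ones(n))                # first CG solve:  H eta = 1
    nu, _ = cg(H, y)                          # second CG solve: H nu  = y
    b = (eta @ y) / (eta @ np.ones(n))        # bias from the equality constraint
    alpha = nu - b * eta                      # dual coefficients
    return alpha, b                           # predict: f(x) = sum_i alpha_i k(x, x_i) + b
```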
Abstract:
Massively parallel networks of highly efficient, high-performance Single Instruction Multiple Data (SIMD) processors have been shown to enable FPGA-based implementation of real-time signal processing applications with performance and cost comparable to dedicated hardware architectures. This is achieved by exploiting simple datapath units with deep processing pipelines. However, these architectures are highly susceptible to pipeline bubbles resulting from data and control hazards; the only way to mitigate these is manual interleaving of application tasks on each datapath, since no suitable automated interleaving approach exists. In this paper we describe a new automated integrated mapping/scheduling approach to map algorithm tasks to processors and a new low-complexity list scheduling technique to generate the interleaved schedules. When applied to a spatial Fixed-Complexity Sphere Decoding (FSD) detector for next-generation Multiple-Input Multiple-Output (MIMO) systems, the resulting schedules achieve real-time performance for IEEE 802.11n systems on a network of 16-way SIMD processors on FPGA, enable a better performance/complexity balance than current approaches, and produce results comparable to handcrafted implementations.
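As background for readers unfamiliar with list scheduling, the sketch below shows a textbook list-scheduling heuristic that assigns dependent tasks to a fixed pool of processors using a simple longest-task-first priority. It is a generic illustration only; the task model, priority function and interleaving constraints of the mapping/scheduling approach described above are not reproduced here.

```python
from collections import defaultdict

def list_schedule(tasks, deps, num_procs):
    """Generic list scheduling: `tasks` maps task name -> duration, `deps` maps
    task name -> list of predecessor names. Returns name -> (proc, start, finish).
    A textbook heuristic, not the interleaving-aware scheduler of the paper."""
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    succs = defaultdict(list)
    for t, preds in deps.items():
        for p in preds:
            succs[p].append(t)
    ready = [t for t in tasks if indeg[t] == 0]
    proc_free = [0.0] * num_procs            # time at which each processor becomes free
    finish, schedule = {}, {}
    while ready:
        ready.sort(key=lambda t: -tasks[t])  # priority: longest task first
        t = ready.pop(0)
        earliest = max((finish[p] for p in deps.get(t, [])), default=0.0)
        proc = min(range(num_procs), key=lambda i: max(proc_free[i], earliest))
        start = max(proc_free[proc], earliest)
        proc_free[proc] = finish[t] = start + tasks[t]
        schedule[t] = (proc, start, finish[t])
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return schedule

# toy example: four dependent tasks on two processors
tasks = {"a": 3, "b": 2, "c": 2, "d": 1}
deps = {"c": ["a"], "d": ["a", "b"]}
print(list_schedule(tasks, deps, num_procs=2))
```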
Abstract:
Turbocompounding is the process of recovering a proportion of an engine’s fuel energy that would otherwise be lost in the exhaust process and adding it to the output power. This was first seen in the 1930s and is carried out by coupling an exhaust gas turbine to the crankshaft of a reciprocating engine. It has since been recognised that coupling the power turbine to an electrical generator instead of the crankshaft has the potential to reduce the fuel consumption further with the added flexibility of being able to decide how this recovered energy is used. The electricity generated can be used in automotive applications to assist the crankshaft using a flywheel motor generator or to power ancillaries that would otherwise have run off the crankshaft. In the case of stationary power plants, it can assist the electrical power output. Decoupling the power turbine from the crankshaft and coupling it to a generator allows the power electronics to control the turbine speed independently in order to optimise the specific fuel consumption for different engine operating conditions. This method of energy recapture is termed ‘turbogenerating’.
This paper gives a brief history of turbocompounding and its thermodynamic merits. It then gives an account of the validation of a turbogenerated engine model. The model is then used to investigate what needs to be done to an engine when a turbogenerator is installed. The engine being modelled is used for stationary power generation and is fuelled by an induced biogas, with a small portion of palm oil injected into the cylinder to initiate combustion by compression ignition. From these investigations, optimum settings were found that result in a 10.90% improvement in overall efficiency. These savings are relative to the same engine operating with fixed fuelling and without a turbogenerator installed.
Abstract:
A bit-level systolic array system is proposed for the Winograd Fourier transform algorithm. The design uses bit-serial arithmetic and, in common with other systolic arrays, features nearest neighbor interconnections, regularity, and high throughput. The short interconnections in this method contrast favorably with the long interconnections between butterflies required in the FFT. The structure is well suited to VLSI implementations. It is demonstrated how long transforms can be implemented with components designed to perform short-length transforms. These components build into longer transforms, preserving the regularity and structure of the short-length transform design.
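The claim that long transforms can be built from short-length transform components can be illustrated in software with the Good-Thomas prime-factor mapping, which composes a length N1*N2 DFT (with gcd(N1, N2) = 1) from length-N1 and length-N2 DFTs without twiddle-factor multiplications. The sketch below is a plain numerical illustration of that composition principle, not a model of the bit-serial systolic hardware.

```python
import numpy as np

def dft_matrix(n):
    # Direct short-length DFT matrix: stands in for a dedicated short-transform component.
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n)

def prime_factor_dft(x, N1, N2):
    """Compose a length N1*N2 DFT from length-N1 and length-N2 DFTs using the
    Good-Thomas prime-factor index mapping (requires gcd(N1, N2) == 1)."""
    N = N1 * N2
    t1 = pow(N1, -1, N2)                      # N1^{-1} mod N2
    t2 = pow(N2, -1, N1)                      # N2^{-1} mod N1
    n1, n2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
    A = x[(n1 * N2 + n2 * N1) % N]            # input reordering (no twiddle factors)
    A = dft_matrix(N1) @ A @ dft_matrix(N2)   # N2 length-N1 DFTs and N1 length-N2 DFTs
    X = np.empty(N, dtype=complex)
    X[(n1 * N2 * t2 + n2 * N1 * t1) % N] = A  # Chinese-remainder output reordering
    return X

x = np.random.randn(12) + 1j * np.random.randn(12)
print(np.allclose(prime_factor_dft(x, 3, 4), np.fft.fft(x)))   # expected: True
```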
Abstract:
The fabrication and performance of the first bit-level systolic correlator array are described. The application of systolic array concepts at the bit level provides a simple and extremely powerful method for implementing high-performance digital processing functions. The resulting structure is highly regular, facilitating yield enhancement through fault-tolerant redundancy techniques, and is therefore ideally suited to implementation as a VLSI chip. The CMOS/SOS chip operates at 35 MHz, is fully cascadable, and performs 64-stage correlation for a 1-bit reference and 4-bit data.
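For readers who want a functional picture of what such a device computes, the following sketch models a 64-stage sliding correlation of multi-bit data samples against a 1-bit reference (represented here as +/-1, an assumption about the coding; the hardware realises this with bit-serial systolic arithmetic rather than floating-point dot products).

```python
import numpy as np

def sliding_correlator(data, reference):
    """Software model of an N-stage correlator: output n is the dot product of
    the reference with the most recent N data samples. Purely illustrative."""
    stages = len(reference)                            # 64 in the reported device
    out = np.empty(len(data) - stages + 1)
    for n in range(len(out)):
        out[n] = np.dot(reference, data[n:n + stages])
    return out

rng = np.random.default_rng(0)
ref = rng.choice([-1, 1], size=64)                     # 1-bit reference (assumed +/-1 coding)
samples = rng.integers(0, 16, size=1000).astype(float) # 4-bit data samples
y = sliding_correlator(samples, ref)
```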
Abstract:
The treatment of the Random-Phase Approximation Hamiltonians encountered in different frameworks, such as time-dependent density functional theory or the Bethe-Salpeter equation, is complicated by their non-Hermiticity. Compared to their Hermitian counterparts, computational methods for the treatment of non-Hermitian Hamiltonians are often less efficient and less stable, sometimes leading to the breakdown of the method. Recently [Gruning et al., Nano Lett. 9 (2009) 2820], we have identified that such Hamiltonians are usually pseudo-Hermitian. Exploiting this property, we have implemented an algorithm of the Lanczos type for Random-Phase Approximation Hamiltonians that benefits from the same stability and computational load as its Hermitian counterpart, and applied it to the study of the optical response of carbon nanotubes. We present here the related theoretical grounds and technical details, and study the performance of the algorithm for the calculation of the optical absorption of a molecule within the Bethe-Salpeter equation framework. (C) 2011 Elsevier B.V. All rights reserved.
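For reference, the pseudo-Hermiticity being exploited can be stated in the standard block form of the RPA Hamiltonian (with $A$ Hermitian and $B$ symmetric); this is the textbook structure, not a summary of the paper's specific Lanczos implementation:

```latex
H = \begin{pmatrix} A & B \\ -B^{*} & -A^{*} \end{pmatrix},
\qquad
\eta = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix},
\qquad
\eta H \eta^{-1} = H^{\dagger},
```

since $\eta H = \begin{pmatrix} A & B \\ B^{*} & A^{*} \end{pmatrix}$ is Hermitian whenever $A^{\dagger} = A$ and $B^{T} = B$. It is this structure that permits a Lanczos-type recursion with essentially the cost and stability of the Hermitian case.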
Abstract:
This paper investigates the construction of linear-in-the-parameters (LITP) models for multi-output regression problems. Most existing stepwise forward algorithms choose the regressor terms one by one, each time maximizing the model error reduction ratio. The drawback is that such procedures cannot guarantee a sparse model, especially under highly noisy learning conditions. The main objective of this paper is to improve the sparsity and generalization capability of a model for multi-output regression problems, while reducing the computational complexity. This is achieved by proposing a novel multi-output two-stage locally regularized model construction (MTLRMC) method using the extreme learning machine (ELM). In this new algorithm, the nonlinear parameters in each term, such as the width of the Gaussian function and the power of a polynomial term, are firstly determined by the ELM. An initial multi-output LITP model is then generated according to the termination criteria in the first stage. The significance of each selected regressor is checked and the insignificant ones are replaced at the second stage. The proposed method can produce an optimized compact model by using the regularized parameters. Further, to reduce the computational complexity, a proper regression context is used to allow fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique. © 2013 Elsevier B.V.
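As a rough illustration of this style of two-stage construction, the sketch below generates candidate regressors from a random Gaussian hidden layer (ELM-style, with randomly drawn centres and widths), greedily forward-selects terms by residual error reduction, and then re-examines each selected term in a backward pass. It is a simplified stand-in under illustrative parameter choices, without the local regularization or the fast-implementation techniques of the MTLRMC algorithm.

```python
import numpy as np

def sse(cols, Y):
    """Residual sum of squares of a least-squares fit of Y on the given columns."""
    theta, *_ = np.linalg.lstsq(cols, Y, rcond=None)
    return float(np.sum((Y - cols @ theta) ** 2))

def forward_select(P, Y, n_terms):
    """Stage 1: greedy forward selection of regressor columns from the candidate
    matrix P (n_samples x n_candidates) for multi-output targets Y."""
    selected, remaining = [], list(range(P.shape[1]))
    for _ in range(n_terms):
        best = min(remaining, key=lambda j: sse(P[:, selected + [j]], Y))
        selected.append(best)
        remaining.remove(best)
    return selected, remaining

def backward_refine(P, Y, selected, remaining):
    """Stage 2: replace a selected term if an unused candidate lowers the
    training error (a simplified significance check)."""
    for i in range(len(selected)):
        for j in list(remaining):
            trial = selected[:i] + [j] + selected[i + 1:]
            if sse(P[:, trial], Y) < sse(P[:, selected], Y):
                remaining.append(selected[i])
                remaining.remove(j)
                selected[i] = j
    return selected

# ELM-style candidate terms: Gaussian nodes with random centres and widths.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = np.column_stack([np.sin(X[:, 0]), X[:, 1] * X[:, 2]]) + 0.05 * rng.normal(size=(200, 2))
centres, widths = rng.normal(size=(50, 3)), rng.uniform(0.5, 2.0, size=50)
P = np.exp(-((X[:, None, :] - centres[None]) ** 2).sum(-1) / (2 * widths ** 2))
selected, remaining = forward_select(P, Y, n_terms=10)
selected = backward_refine(P, Y, selected, remaining)
```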
Abstract:
Objectives: The Liverpool Care Pathway for the dying patient (LCP) was designed to improve end-of-life care in generalist health care settings. Controversy has led to its withdrawal in some jurisdictions. The main objective of this research was to identify the influences that facilitated or hindered successful LCP implementation.
Method: An organisational case study using realist evaluation in one health and social care trust in Northern Ireland. Two rounds of semi-structured interviews were conducted with two policy makers and twenty-two participants with experience and/or involvement in management of the LCP during 2011 and 2012.
Results: Key resource inputs included facilitation with a view to maintaining LCP ‘visibility’, reducing anxiety among nurses and increasing their confidence regarding the delivery of end-of-life care; and nurse and medical education designed to increase professional self-efficacy and reduce misuse and misunderstanding of the LCP. Key enabling contexts were consistent senior management support; ongoing education and training tailored to the needs of each professional group; and an organisational cultural change in the hospital setting that encompassed end-of-life care.
Conclusion: There is a need to appreciate the organisationally complex nature of intervening to improve end-of-life care. Successful implementation of evidence-based interventions for end-of-life care requires commitment to planning, training and ongoing review that takes account of different perspectives, institutional hierarchies and relationships, and the educational needs of professional disciplines. There is also a need to recognise that medical consultants require particular support in their role as gatekeepers and as a lead communication channel with patients and their relatives.
Abstract:
Implementation of both design for durability and performance-based standards and specifications is limited by the lack of rapid, simple, science-based test methods for characterising the transport properties and deterioration resistance of concrete. This paper presents developments in the application of electrical property measurements as a testing methodology to evaluate the relative performance of a range of concrete mixes. The technique lends itself to in-situ monitoring, thereby allowing measurements to be obtained on the as-placed concrete. Conductivity measurements are presented for concretes with and without supplementary cementitious materials (SCMs) from demoulding up to 350 days. It is shown that electrical conductivity measurements display a continual decrease over the entire test period, attributed to pore structure refinement due to hydration and pozzolanic reaction. The term formation factor is introduced to rank concrete performance in terms of its resistance to chloride penetration.
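For context, the formation factor is conventionally defined as the ratio of the pore-solution conductivity to the bulk conductivity of the concrete (equivalently, bulk resistivity over pore-solution resistivity); a higher formation factor indicates a more refined, less connected pore structure and hence greater resistance to chloride penetration. The expression below states this conventional definition; the paper's exact ranking procedure may differ.

```latex
F = \frac{\sigma_{\mathrm{pore}}}{\sigma_{\mathrm{bulk}}}
  = \frac{\rho_{\mathrm{bulk}}}{\rho_{\mathrm{pore}}}
```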
Abstract:
This paper proposes an efficient learning mechanism to build fuzzy rule-based systems through the construction of sparse least-squares support vector machines (LS-SVMs). In addition to the significantly reduced computational complexity in model training, the resultant LS-SVM-based fuzzy system is sparser while offering satisfactory generalization capability over unseen data. It is well known that LS-SVMs have a computational advantage over conventional SVMs in the model training process; however, the model sparseness is lost, which is the main drawback of LS-SVMs and remains an open problem. To tackle this nonsparseness issue, a new regression alternative to the Lagrangian solution for the LS-SVM is first presented. A novel efficient learning mechanism is then proposed to extract a sparse set of support vectors for generating fuzzy IF-THEN rules. This mechanism works in a stepwise subset-selection manner, including a forward expansion phase and a backward exclusion phase in each selection step. The implementation of the algorithm is computationally very efficient due to the introduction of a few key techniques that avoid matrix inverse operations and accelerate the training process. The computational efficiency is also confirmed by a detailed computational complexity analysis. As a result, the proposed approach not only achieves sparseness of the resultant LS-SVM-based fuzzy systems but also significantly reduces the amount of computational effort in model training. Three experimental examples are presented to demonstrate the effectiveness and efficiency of the proposed learning mechanism and the sparseness of the obtained LS-SVM-based fuzzy systems, in comparison with other SVM-based learning techniques.
Abstract:
This paper describes how urban agriculture differs from conventional agriculture not only in the way it engages with the technologies of growing, but also in the choice of crop and the way these are brought to market. The authors propose a new model for understanding these new relationships, which is analogous to a systems view of information technology, namely Hardware-Software-Interface.
The first component of the system is hardware. This is the technological component of the agricultural system. Technology is often thought of as equipment, but its linguistic roots are in 'techne', which means 'know-how'. Urban agriculture has to engage new technologies, ones that deal with a scale of operation and a context different from those of rural agriculture. Often the scale is very small and the soils are polluted. The technology in urban agriculture could be technical, such as aquaponic systems, or soil-based, such as allotments, window-boxes or permaculture. The choice of method does not necessarily determine the crop produced or its efficiency. This is linked to the biotic element that is added to the hardware, which is seen as the 'software'.
The software of the system comprises its ecological parts. These produce the crop, which may or may not be determined by the technology used. For example, a hydroponic system could produce a range of crops, or even fish or edible flowers. Software choice can be driven by ideological preferences, such as permaculture, where companion planting is used to reduce disease and pests, or by economic factors such as the local market at a particular time of the year. The monetary value of the 'software' is determined by the market. Obviously, small, locally produced crops are unlikely to compete against intensive products produced globally; however, the value locally might be measured in different ways, and the produce might be sold on a different market. This leads to the final part of the analogy: interface.
The interface is the link between the system and the consumer. In traditional agriculture, there is a tenuous link between the producer of asparagus in Peru and the consumer in Europe. In fact, very little of the money spent by the consumer ever reaches the grower; most of it is spent on refrigeration, transport and profit for agents and supermarket chains. Local or hyper-local agriculture needs to bypass these systems and be connected more directly to the consumer. This is the interface. In hyper-localised systems, effectiveness is often more important than efficiency, and direct links between producer and consumer create new economies.
Abstract:
Several phenomena present in electrical systems have motivated the development of comprehensive models based on the theory of fractional calculus (FC). Bearing these ideas in mind, in this work FC concepts are applied to define and to evaluate the electrical potential of fractional order, based on a genetic algorithm optimization scheme. The feasibility and the convergence of the proposed method are evaluated.
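As a loose illustration of the optimization scheme mentioned above, the sketch below uses a minimal genetic algorithm to fit an assumed fractional-order potential model V(r) = c / r**alpha to sampled values. Both the model form and every parameter choice here are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "observed" potential generated from an assumed fractional-order
# model V(r) = c / r**alpha (the model form itself is an assumption of this sketch).
r = np.linspace(0.5, 5.0, 50)
v_obs = 2.0 / r ** 0.6

def fitness(params):
    c, alpha = params
    return -np.sum((c / r ** alpha - v_obs) ** 2)   # negative SSE: higher is fitter

def genetic_fit(pop_size=60, generations=200, mut_sigma=0.05):
    # population of candidate (c, alpha) pairs
    pop = rng.uniform([0.1, 0.1], [5.0, 2.0], size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # selection: keep best half
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        children = parents.mean(axis=1)                          # blend crossover
        children += rng.normal(0.0, mut_sigma, children.shape)   # mutation
        pop = np.clip(children, [0.1, 0.1], [5.0, 2.0])
    return max(pop, key=fitness)

print(genetic_fit())   # should converge toward roughly (2.0, 0.6)
```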
Abstract:
The phenomenon of communitas has been described as a moment 'in and out of time' in which a collective of individuals may be experienced by one as equal and individuated in an environment stripped of structural attributes (Turner, 1969). In these moments, emotional bonds form and an experience of perceived 'oneness' and synergy may be described. As a result of the perceived value of these experiences, it has been suggested by Sharpe (2005) that more clearly understanding how this phenomenon may be purposefully facilitated would be beneficial for leisure service providers. Consequently, the purpose of this research endeavor was to examine the ways in which a particular leisure service provider systematically employs specific methods and sets specific parameters with the intention of guiding participants toward experiences associated with communitas or "shared spirit" as described by the organization. A qualitative case study taking a phenomenological approach was employed in order to capture the depth and complexity of both the phenomenon and the purposeful negotiation of experiences in guiding participants toward this phenomenon. The means through which these experiences were intentionally facilitated was recreational music making in a group drumming context. As such, an organization which employs specific methods of rhythm circle facilitation, and which trains other facilitators all over the world, was chosen purposely for its recognition as the most respectable and credible in this field. The specific facilitator was chosen based on high recommendation by the organization due to her level of experience and expertise. Two rhythm circles were held, and participants were chosen randomly by the facilitator. Data was collected through observation in the first circle and participant-observation in the second, as well as through focus groups with circle participants. Interviews with the facilitator were held both initially, to gain a broad understanding of the concepts and phenomenon, and after each circle, to reflect on each circle specifically. Data was read repeatedly to draw out the patterns which emerged, and these were coded and organized accordingly. It was found that this specific process or system of implementation led to experiences associated with communitas by participants. In order to more clearly understand this process and the ways in which experiences associated with communitas manifest as a result of deliberate facilitator actions, these objective facilitator actions were plotted along a continuum relating to subjective participant experiences. These findings were then linked to the literature with regard to specific characteristics of communitas. In so doing, the intentional manifestation of these experiences may be more clearly understood by future facilitators in many contexts. Beyond this, the findings summarized important considerations with regard to specific technical and communication competencies which were found to be essential to fostering these experiences for participants within each group. Findings surrounding the maintenance of a fluid negotiation of certain transition points within a group rhythm event were also highlighted, and this fluidity was found to be essential to the experience of absorption and engagement in the activity. Emergent themes of structure, control, and consciousness are presented as they manifested and were found to affect experiences within this study.
Discussions surrounding the ethics and authenticity of these particular methods and their implementation have also been generated throughout. In conclusion, there was both breadth and depth of knowledge found in unpacking this complex process of guiding individuals toward experiences associated with communitas. The implications of these findings contribute to broadening the current theoretical and practical understanding of how certain intentional parameters may be set and methods employed that may lead to experiences of communitas, and also contribute greater knowledge to conceptualizing the manifestation of these experiences.
Abstract:
The finite element method (FEM) is now developed to solve the two-dimensional Hartree-Fock (HF) equations for atoms and diatomic molecules. The method and its implementation are described, and results are presented for the atoms Be, Ne and Ar as well as for the diatomic molecules LiH, BH, N_2 and CO as examples. Total energies and eigenvalues calculated with the FEM at the HF level are compared with results obtained with the standard numerical methods used for the solution of the one-dimensional HF equations for atoms, and, for diatomic molecules, with the traditional LCAO quantum chemical methods and the newly developed finite difference method at the HF level. In general, the accuracy increases from the LCAO method to the finite difference method to the finite element method.
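Schematically (and independently of the particular coordinates and element types used in the paper), a finite element discretization expands each orbital in local basis functions and turns the one-particle HF equation into a sparse generalized matrix eigenvalue problem:

```latex
\varphi(\mathbf{r}) \approx \sum_{i=1}^{M} c_i \, N_i(\mathbf{r}),
\qquad
\mathbf{F}\,\mathbf{c} = \varepsilon\,\mathbf{S}\,\mathbf{c},
\qquad
F_{ij} = \int N_i(\mathbf{r})\, \hat{F}\, N_j(\mathbf{r}) \, d\mathbf{r},
\quad
S_{ij} = \int N_i(\mathbf{r})\, N_j(\mathbf{r}) \, d\mathbf{r},
```

where $\hat{F}$ is the Fock operator, the $N_i$ are the local element basis functions and $\mathbf{S}$ is the overlap matrix.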