14 results for Return-based pricing kernel

at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 100.00%

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error-correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to introduce FPGAs as a potential platform for efficiently executing simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes, ranging from short/medium-length (e.g., 8,000-bit) to long-length (e.g., 64,800-bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, either the GPU or the FPGA may be the more convenient platform, each providing different acceleration factors over conventional multicore CPUs.
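Not part of the paper itself, but as an illustration of the single-source idea: a minimal pyopencl sketch in which one OpenCL kernel string is built and run unchanged on whatever device the runtime exposes (multicore CPU, GPU, or an FPGA flow such as SOpenCL). The kernel body is only a placeholder for the decoder's real check/variable-node update kernels, and all names and sizes are illustrative.

```python
import numpy as np
import pyopencl as cl

# One kernel source, compiled for whatever OpenCL device is selected at run time.
KERNEL_SRC = """
__kernel void clamp_llr(__global const float *llr_in,
                        __global float *llr_out,
                        const unsigned int n,
                        const float limit)
{
    int gid = get_global_id(0);
    if (gid < n) {
        float v = llr_in[gid];
        llr_out[gid] = fmin(fmax(v, -limit), limit);  /* stand-in for node update */
    }
}
"""

ctx = cl.create_some_context()              # picks CPU, GPU, ... depending on platform
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, KERNEL_SRC).build()   # same source on every target

n = 64800                                   # e.g. DVB-S2 long-frame length
llr = np.random.randn(n).astype(np.float32)
mf = cl.mem_flags
d_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=llr)
d_out = cl.Buffer(ctx, mf.WRITE_ONLY, llr.nbytes)

prg.clamp_llr(queue, (n,), None, d_in, d_out, np.uint32(n), np.float32(8.0))
out = np.empty_like(llr)
cl.enqueue_copy(queue, out, d_out)          # fetch results back to the host
```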

Relevance: 40.00%

Abstract:

We consider local order estimation for nonlinear autoregressive systems with exogenous inputs (NARX), which may have different local dimensions at different points. By minimizing the kernel-based local information criterion introduced in this paper, strongly consistent estimates of the local orders of the NARX system at the points of interest are obtained. A modification of the criterion and a simple procedure for searching for the minimum of the criterion are also discussed. The theoretical results derived here are tested by simulation examples.
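The paper's exact criterion is not reproduced here; the following is only a generic sketch of the idea, assuming a Gaussian kernel, a locally weighted least-squares fit for each candidate order pair (p, q), and an invented complexity penalty.

```python
import numpy as np

def local_order_estimate(y, u, y_star, u_star, max_p=3, max_q=3, h=0.5, c=2.0):
    """Generic sketch (not the paper's criterion): for each candidate order pair
    (p, q), fit a kernel-weighted linear NARX model around the point of interest,
    whose lagged outputs/inputs are y_star and u_star, and pick the orders that
    minimise fit error plus an assumed penalty term."""
    N = len(y)
    best, best_pq = np.inf, (1, 1)
    for p in range(1, max_p + 1):
        for q in range(1, max_q + 1):
            start = max(p, q)
            # Regressor rows phi_t = [y_{t-1..t-p}, u_{t-1..t-q}], t = start..N-1
            Phi = np.column_stack(
                [y[start - i:N - i] for i in range(1, p + 1)] +
                [u[start - j:N - j] for j in range(1, q + 1)])
            Y = y[start:N]
            x0 = np.concatenate([y_star[:p], u_star[:q]])        # point of interest
            w = np.exp(-np.sum((Phi - x0) ** 2, axis=1) / (2 * h ** 2))
            sw = np.sqrt(w)[:, None]
            theta, *_ = np.linalg.lstsq(sw * Phi, sw[:, 0] * Y, rcond=None)
            rss = np.sum(w * (Y - Phi @ theta) ** 2) / w.sum()
            crit = np.log(rss) + c * (p + q) / w.sum()           # assumed penalty form
            if crit < best:
                best, best_pq = crit, (p, q)
    return best_pq
```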

Relevance: 40.00%

Abstract:

In this paper, we consider the variable selection problem for a nonlinear non-parametric system. Two approaches are proposed: a top-down approach and a bottom-up approach. The top-down algorithm selects a variable by detecting whether the corresponding partial derivative is zero or not at the point of interest. The algorithm is shown to enjoy not only parameter convergence but also set convergence. This is critical because the variable selection problem is binary: a variable is either selected or not. The bottom-up approach is based on forward/backward stepwise selection, which is designed to work when the data length is limited. Both approaches determine the most important variables locally and allow the unknown non-parametric nonlinear system to have different local dimensions at different points of interest. Two potential applications, along with numerical simulations, are provided to illustrate the usefulness of the proposed algorithms.
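As a rough illustration of the top-down idea (not the paper's algorithm): estimate the local partial derivatives with a kernel-weighted local linear fit around the point of interest and keep the variables whose slopes are non-negligible. The bandwidth and threshold below are arbitrary choices.

```python
import numpy as np

def select_variables_local(X, y, x0, h=0.5, tau=0.1):
    """Sketch of top-down local variable selection: fit a local linear model
    around x0 with Gaussian kernel weights; the fitted slopes approximate the
    partial derivatives, and a variable is kept if its slope exceeds tau."""
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * h ** 2))   # kernel weights
    Z = np.hstack([np.ones((len(X), 1)), X - x0])               # intercept + centred inputs
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(sw * Z, np.sqrt(w) * y, rcond=None)
    slopes = beta[1:]                                           # local partial derivatives
    return np.flatnonzero(np.abs(slopes) > tau)                 # indices of selected variables
```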

Relevance: 40.00%

Abstract:

In this paper, a novel and effective lip-based biometric identification approach with the Discrete Hidden Markov Model Kernel (DHMMK) is developed. Lips are described by shape features (both geometrical and sequential) on two different grid layouts: rectangular and polar. These features are then specifically modeled by a DHMMK and learned by a support vector machine classifier. Our experiments are carried out in a ten-fold cross-validation fashion on three different datasets: the GPDS-ULPGC Face Dataset, the PIE Face Dataset, and the RaFD Face Dataset. Results show that our approach achieves average classification accuracies of 99.8%, 97.13%, and 98.10% on these three datasets, respectively, using only two training images per class. Our comparative studies further show that the DHMMK achieves a 53% improvement over the baseline HMM approach. The comparative ROC curves also confirm the efficacy of the proposed lip-contour-based biometrics learned by the DHMMK. We also show that the performance of linear and RBF SVMs is comparable under the DHMMK framework.
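The DHMMK construction itself is not reproduced here; the sketch below only illustrates, under assumptions, how per-class discrete-HMM log-likelihood scores could be turned into a precomputed kernel for scikit-learn's SVC. The helper `loglik(seq, model)` and the trained per-class `models` are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

def hmm_score_kernel(seqs_a, seqs_b, models, loglik):
    """Hypothetical sketch: represent each lip-contour sequence by its vector of
    per-class discrete-HMM log-likelihoods, then take an inner-product kernel
    between those score vectors. `loglik(seq, model)` is an assumed helper that
    scores a discretised contour sequence under a trained discrete HMM."""
    Fa = np.array([[loglik(s, m) for m in models] for s in seqs_a])
    Fb = np.array([[loglik(s, m) for m in models] for s in seqs_b])
    return Fa @ Fb.T

# Usage sketch (train_seqs, test_seqs, labels, models, loglik are assumed to exist):
# K_train = hmm_score_kernel(train_seqs, train_seqs, models, loglik)
# clf = SVC(kernel="precomputed").fit(K_train, train_labels)
# K_test = hmm_score_kernel(test_seqs, train_seqs, models, loglik)
# preds = clf.predict(K_test)
```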

Relevance: 30.00%

Abstract:

A simple method for selecting the surface-mounted loading resistor required by a thin radar absorber based on the high-impedance surface (HIS) principle is demonstrated. The absorber consists of a HIS (artificial magnetic ground plane) of thickness 0.03 λ0, with surface-loaded resistive elements interconnecting a textured surface of square patches. The properties of the absorber are characterized under normal incidence using a parallel-plate waveguide measurement technique over the operating frequency range of 2.6-3.95 GHz. We show that, for this arrangement, return loss and bandwidth are insensitive to ±2% tolerance variations in the surface resistor value about the value predicted using the method elaborated in this letter, and that a reflection loss of better than -28 dB at 3.125 GHz can be obtained with an effective working bandwidth of up to 11% at -10 dB reflection loss. (C) 2009 Wiley Periodicals, Inc. Microwave Opt Technol Lett 51: 1733-1775, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.24454

Relevance: 30.00%

Abstract:

This paper studies the dynamic pricing problem of selling a fixed stock of perishable items over a finite horizon, where the decision maker does not have the historical data needed to estimate the distribution of uncertain demand but does have imprecise information about the demand quantity. We model this uncertainty using fuzzy variables. The dynamic pricing problem based on credibility theory is formulated using three fuzzy programming models, viz. the fuzzy expected revenue maximization model, the α-optimistic revenue maximization model, and the credibility maximization model. Fuzzy simulations for functions with fuzzy parameters are given and embedded into a genetic algorithm to design a hybrid intelligent algorithm that solves these three models. Finally, a real-world example is presented to highlight the effectiveness of the developed model and algorithm.
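A minimal sketch of how a fuzzy simulation could be embedded in a genetic algorithm, assuming a hypothetical `fuzzy_expected_revenue(prices)` routine that approximates the credibility-based objective; the GA operators and parameters are illustrative, not those of the paper.

```python
import numpy as np

def hybrid_ga(fuzzy_expected_revenue, horizon, pop=40, gens=200,
              p_min=1.0, p_max=10.0, mut=0.1, seed=0):
    """Sketch of the hybrid loop: a plain GA searches over price paths, and each
    candidate's fitness is the approximate fuzzy expected revenue returned by
    the assumed fuzzy-simulation routine `fuzzy_expected_revenue(prices)`."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(p_min, p_max, size=(pop, horizon))        # initial price paths
    for _ in range(gens):
        fit = np.array([fuzzy_expected_revenue(p) for p in P])
        elite = P[np.argsort(-fit)[: pop // 2]]               # keep the better half
        pairs = elite[rng.integers(0, len(elite), size=(pop - len(elite), 2))]
        cut = rng.integers(1, horizon, size=len(pairs))
        children = np.where(np.arange(horizon) < cut[:, None],
                            pairs[:, 0, :], pairs[:, 1, :])   # one-point crossover
        noise = rng.normal(0, mut * (p_max - p_min), children.shape)
        children = np.clip(children + noise, p_min, p_max)    # Gaussian mutation
        P = np.vstack([elite, children])
    fit = np.array([fuzzy_expected_revenue(p) for p in P])
    return P[np.argmax(fit)]                                  # best price path found
```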

Relevance: 30.00%

Abstract:

In this paper, we propose the return-to-cost ratio (RCR) as an alternative approach to the analysis of the operational eco-efficiency of companies, based on the notion of opportunity costs. RCR helps to overcome two fundamental deficits of existing approaches to eco-efficiency. (1) It translates eco-efficiency into managerial terms by applying the well-established notion of opportunity costs to eco-efficiency analysis. (2) It allows the drivers behind changes in corporate eco-efficiency to be identified and quantified. RCR is applied to the analysis of the CO2 efficiency of German companies in order to illustrate its usefulness for a detailed analysis of changes in corporate eco-efficiency as well as for the development of effective environmental strategies. (C) 2010 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

The theoretical concept of ‘social capital’ has been increasingly invoked in connection to religion by academics, policy makers, charities and Faith Based Organisations (FBOs). Drawing on the popularisation of the term by Robert Putnam, many in these groups have hailed the religious as one of the most productive generators of social capital in today’s societies. In this article, we examine this claim through ethnographic material relating to Faithworks, a national ‘movement’ of Christians who provide welfare services within their communities. We claim that to apply the term ‘social capital’ in a meaningful sociological manner to FBOs requires a return to Pierre Bourdieu’s use of the term in order to refuse to extricate it from the practices in which it is enmeshed.

Relevance: 30.00%

Abstract:

Increasingly, infrastructure providers are supplying the cloud marketplace with storage and on-demand compute resources to host cloud applications. From an application user's point of view, it is desirable to identify the most appropriate set of available resources on which to execute an application. Resource choice can be complex and may involve comparing available hardware specifications, operating systems, value-added services, such as network configuration or data replication, and operating costs, such as hosting cost and data throughput. Providers' cost models often change, and new commodity cost models, such as spot pricing, have been introduced to offer significant savings. In this paper, a software abstraction layer is used to discover infrastructure resources for a particular application, across multiple providers, using a two-phase constraints-based approach. In the first phase, a set of possible infrastructure resources is identified for a given application. In the second phase, a heuristic is used to select the most appropriate resources from the initial set. For some applications a cost-based heuristic is most appropriate; for others a performance-based heuristic may be used. A financial services application and a high-performance computing application are used to illustrate the execution of the proposed resource discovery mechanism. The experimental results show that the proposed model can dynamically select an appropriate set of resources that matches the application's requirements.
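A minimal sketch of the two-phase idea under assumed resource attributes: phase one filters out resources that violate the application's hard constraints, and phase two ranks the survivors with either a cost-based or a performance-based heuristic.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    vcpus: int
    mem_gb: float
    cost_per_hour: float
    perf_score: float            # e.g. a benchmark result; illustrative field

def discover(resources, min_vcpus, min_mem_gb, heuristic="cost"):
    """Sketch of two-phase selection: phase 1 keeps only resources satisfying the
    hard constraints; phase 2 orders the survivors by the chosen heuristic."""
    feasible = [r for r in resources
                if r.vcpus >= min_vcpus and r.mem_gb >= min_mem_gb]   # phase 1
    if heuristic == "cost":
        key = lambda r: r.cost_per_hour                               # cheapest first
    else:
        key = lambda r: -r.perf_score                                 # fastest first
    return sorted(feasible, key=key)                                  # phase 2
```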

Relevance: 30.00%

Abstract:

Massively parallel networks of highly efficient, high-performance Single Instruction Multiple Data (SIMD) processors have been shown to enable FPGA-based implementation of real-time signal processing applications with performance and cost comparable to dedicated hardware architectures. This is achieved by exploiting simple datapath units with deep processing pipelines. However, these architectures are highly susceptible to pipeline bubbles resulting from data and control hazards; the only way to mitigate these is manual interleaving of application tasks on each datapath, since no suitable automated interleaving approach exists. In this paper we describe a new automated integrated mapping/scheduling approach to map algorithm tasks to processors and a new low-complexity list scheduling technique to generate the interleaved schedules. When applied to a spatial Fixed-Complexity Sphere Decoding (FSD) detector for next-generation Multiple-Input Multiple-Output (MIMO) systems, the resulting schedules achieve real-time performance for IEEE 802.11n systems on a network of 16-way SIMD processors on FPGA, enable a better performance/complexity balance than current approaches, and produce results comparable to handcrafted implementations.
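As a rough, assumption-laden illustration of list scheduling with interleaving (not the paper's technique): tasks become ready when all their predecessors finish, and each ready task is placed on the processor that can start it earliest; the priority rule below is a simple stand-in.

```python
def list_schedule(tasks, deps, durations, n_procs):
    """Sketch of low-complexity list scheduling. `deps[t]` lists the predecessors
    of task t and `durations[t]` is its execution time. Independent tasks end up
    interleaved across processors, which is what keeps deep pipelines busy."""
    remaining = {t: len(deps[t]) for t in tasks}   # unscheduled predecessors per task
    finish = {}                                    # task -> finish time
    proc_free = [0.0] * n_procs                    # next free time per processor
    ready = [t for t in tasks if remaining[t] == 0]
    schedule = []
    while ready:
        # Simple priority: longest task first (a stand-in for a real priority rule).
        ready.sort(key=lambda t: -durations[t])
        t = ready.pop(0)
        p = min(range(n_procs), key=lambda i: proc_free[i])
        start = max(proc_free[p], max((finish[d] for d in deps[t]), default=0.0))
        finish[t] = start + durations[t]
        proc_free[p] = finish[t]
        schedule.append((t, p, start))
        for s in tasks:                            # release successors of t
            if t in deps[s]:
                remaining[s] -= 1
                if remaining[s] == 0:
                    ready.append(s)
    return schedule
```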

Relevance: 30.00%

Abstract:

Film adaptations of superhero comic-books offer a particularly rich case study to analyse narrative strategies of contemporary Hollywood cinema. The serial structures adopted by the comics they are based on, as well as their use of the spectacular potential of the image, provide a successful model for current audiovisual productions. Without completely abandoning classical techniques, these adaptations try to find a new balance between narrative and digital phantasmagoria. This paper discusses some significant examples of this genre, including adaptations of classical DC and Marvel franchises and more recent series, as well as other comic-book influenced films such as The Matrix and Unbreakable.

Relevance: 30.00%

Abstract:

In this paper we consider charging strategies that mitigate the impact of domestic charging of EVs on low-voltage distribution networks and that seek to reduce peak power by responding to time-of-day pricing. The strategies are based on the distributed Additive Increase and Multiplicative Decrease (AIMD) charging algorithms proposed in [5]. The strategies are evaluated using simulations conducted on a custom OpenDSS-Matlab platform for a typical low-voltage residential feeder network. Results show that by using AIMD-based smart charging, 50% EV penetration can be accommodated on our test network, compared to only 10% with uncontrolled charging, without needing to reinforce existing network infrastructure. © Springer-Verlag Berlin Heidelberg 2013.
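A minimal sketch of the AIMD idea under assumed parameters (not the algorithm of [5]): each EV increases its charging rate additively while the feeder has headroom and backs off multiplicatively when a congestion or price signal is broadcast.

```python
import numpy as np

def aimd_charging(n_evs, steps, capacity, alpha=0.5, beta=0.5, max_rate=7.0, seed=0):
    """Sketch of distributed AIMD charging: each EV raises its rate by alpha (kW)
    per step while total demand is below the feeder capacity, and multiplies it
    by beta when a congestion (or high-price) signal arrives. Parameter values
    are illustrative only."""
    rng = np.random.default_rng(seed)
    rates = np.zeros(n_evs)
    history = []
    for _ in range(steps):
        congested = rates.sum() > capacity            # feeder / price signal
        if congested:
            backoff = rng.random(n_evs) < 0.7         # not every EV need respond
            rates[backoff] *= beta                    # multiplicative decrease
        else:
            rates = np.minimum(rates + alpha, max_rate)   # additive increase
        history.append(rates.copy())
    return np.array(history)                          # per-step charging rates
```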

Relevance: 30.00%

Abstract:

MarcoPolo-R is a sample return mission to a primitive Near-Earth Asteroid (NEA) proposed in collaboration with NASA. It will rendezvous with a primitive NEA, scientifically characterize it at multiple scales, and return a unique sample to Earth unaltered by the atmospheric entry process or terrestrial weathering. MarcoPolo-R will return bulk samples (up to 2 kg) from an organic-rich binary asteroid to Earth for laboratory analyses, allowing us to: explore the origin of planetary materials and the initial stages of habitable planet formation; identify and characterize the organics and volatiles in a primitive asteroid; and understand the unique geomorphology, dynamics and evolution of a binary NEA. This project is based on the previous Marco Polo mission study, which was selected for the Assessment Phase of the first round of Cosmic Vision. Its scientific rationale was highly ranked by ESA committees and it was not selected only because the estimated cost was higher than the allotted amount for an M-class mission. The cost of Marco Polo-R will be reduced to within the ESA medium mission budget by collaboration with APL (Johns Hopkins University) and JPL in the NASA program for coordination with ESA's Cosmic Vision Call. The baseline target is the binary asteroid (175706) 1996 FG3, which offers a very efficient operational and technical mission profile. A binary target also provides enhanced science return. The choice of this target will allow new investigations to be performed more easily than at a single object, and also enables investigations of the fascinating geology and geophysics of asteroids that are impossible at a single object. Several launch windows have been identified in the time span 2020-2024. A number of other possible primitive single targets of high scientific interest have been identified, covering a wide range of possible launch dates. The baseline mission scenario of Marco Polo-R to 1996 FG3 is as follows: a single primary spacecraft provided by ESA, carrying the Earth Re-entry Capsule and the sample acquisition and transfer system provided by NASA, will be launched by a Soyuz-Fregat rocket from Kourou into GTO, using two space segment stages. Two similar missions with two launch windows, in 2021 and 2022, both with sample return in 2029 (mission durations of 7 and 8 years), have been defined. Earlier or later launches, in 2020 or 2024, also offer good opportunities. All manoeuvres are carried out by a chemical propulsion system. MarcoPolo-R takes advantage of three industrial studies completed as part of the previous Marco Polo mission (see ESA/SRE(2009)3, Marco Polo Yellow Book) and of the expertise of the consortium led by Dr. A. F. Cheng (PI of the NASA NEAR Shoemaker mission) of JHU-APL, including JPL, NASA ARC, NASA LaRC, and MIT.