902 results for Application method


Relevance: 30.00%

Abstract:

We present a new method to determine mesospheric electron densities from partially reflected medium frequency radar pulses. The technique uses an optimal estimation inverse method and retrieves both an electron density profile and a gradient electron density profile. As well as accounting for the absorption of the two magnetoionic modes formed by ionospheric birefringence of each radar pulse, the forward model of the retrieval parameterises possible Fresnel scatter of each mode by fine electronic structure, phase changes of each mode due to Faraday rotation, and the dependence of the amplitudes of the backscattered modes upon pulse width. Validation results indicate that known profiles can be retrieved and that χ² tests upon retrieval parameters satisfy validity criteria. Application to measurements shows that retrieved electron density profiles are consistent with accepted ideas about the seasonal variability of electron densities and their dependence upon nitric oxide production and transport.
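Optimal estimation retrievals of this kind typically iterate a Gauss-Newton update of the state towards the measurements under a priori constraints. Below is a minimal sketch of one such update, with a toy linear operator standing in for the radar forward model; all matrices, dimensions and values are illustrative placeholders, not the paper's.

```python
import numpy as np

def gauss_newton_step(x, x_a, y, forward, jacobian, S_a_inv, S_e_inv):
    """One Gauss-Newton iteration of optimal estimation (Rodgers-style).

    x        : current state estimate (e.g. an electron density profile)
    x_a      : a priori state
    y        : measurement vector (e.g. radar amplitudes/phases)
    forward  : F(x) -> simulated measurements
    jacobian : K(x) -> dF/dx evaluated at x
    """
    K = jacobian(x)
    # Hessian of J = (y-F)^T Se^-1 (y-F) + (x-xa)^T Sa^-1 (x-xa)
    A = K.T @ S_e_inv @ K + S_a_inv
    b = K.T @ S_e_inv @ (y - forward(x) + K @ (x - x_a))
    return x_a + np.linalg.solve(A, b)

# Toy linear forward model standing in for the radar scattering model
n, m = 20, 40
rng = np.random.default_rng(0)
K_true = rng.standard_normal((m, n))
x_true = np.linspace(0.0, 1.0, n)
y_obs = K_true @ x_true + 0.01 * rng.standard_normal(m)

x_a = np.zeros(n)
x_hat = gauss_newton_step(x_a, x_a, y_obs,
                          forward=lambda x: K_true @ x,
                          jacobian=lambda x: K_true,
                          S_a_inv=np.eye(n), S_e_inv=1e4 * np.eye(m))
print(np.round(x_hat[:5], 2))
```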

Relevance: 30.00%

Abstract:

In this paper we explore classification techniques for ill-posed problems. Two classes are linearly separable in some Hilbert space X if they can be separated by a hyperplane. We investigate stable separability, i.e. the case where there is a positive distance between two separating hyperplanes. When the data in the space Y are generated by a compact operator A applied to the system states x ∈ X, we show that in general we do not obtain stable separability in Y even if the problem in X is stably separable. In particular, we show this for the case where a nonlinear classification is generated from a non-convergent family of linear classes in X. We apply our results to the problem of quality control of fuel cells, where we classify fuel cells according to their efficiency. A fuel cell can potentially be classified using either some external measured magnetic field or some internal current; however, the current cannot be measured directly, since the fuel cell is inaccessible in operation. The first possibility is to apply discrimination techniques directly to the measured magnetic fields. The second approach first reconstructs the currents and then carries out the classification on the current distributions. We show that both approaches need regularization and that the regularized classifications are not equivalent in general. Finally, we investigate a widely used linear classification algorithm, Fisher's linear discriminant, with respect to its ill-posedness when applied to data generated via a compact integral operator. We show that the method does not remain stable when the number of measurement points becomes large.
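To illustrate the instability described here, the sketch below applies Fisher's linear discriminant to data observed through a compact (smoothing) operator; the operator, data and regularisation parameter are hypothetical stand-ins, but the pattern shown, an exploding discriminant norm without regularisation, is the phenomenon at issue.

```python
import numpy as np

rng = np.random.default_rng(1)

def fisher_direction(Y0, Y1, lam=0.0):
    """Fisher's discriminant direction w = (S_w + lam*I)^-1 (mu1 - mu0)."""
    mu0, mu1 = Y0.mean(axis=0), Y1.mean(axis=0)
    S_w = np.cov(Y0.T) + np.cov(Y1.T)            # within-class scatter
    return np.linalg.solve(S_w + lam * np.eye(S_w.shape[0]), mu1 - mu0)

m, n = 50, 200                                    # measurement points, samples
t = np.linspace(0, 1, m)
# Compact smoothing operator A (Gaussian kernel): states -> measurements
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.01)
A /= A.sum(axis=1, keepdims=True)

# Two stably separable classes of states, observed through A with noise
states0 = rng.standard_normal((n, m)) + np.sin(2 * np.pi * t)
states1 = rng.standard_normal((n, m)) - np.sin(2 * np.pi * t)
Y0 = states0 @ A.T + 1e-3 * rng.standard_normal((n, m))
Y1 = states1 @ A.T + 1e-3 * rng.standard_normal((n, m))

w_raw = fisher_direction(Y0, Y1)                  # ill-conditioned, huge norm
w_reg = fisher_direction(Y0, Y1, lam=1e-2)        # Tikhonov-regularised
print(f"||w|| unregularised: {np.linalg.norm(w_raw):.1e}, "
      f"regularised: {np.linalg.norm(w_reg):.1e}")
```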

Relevance: 30.00%

Abstract:

To address whether seasonal variability exists among Shiga toxin-encoding bacteriophage (Stx phage) numbers on a cattle farm, a conventional plaque assay was performed on water samples collected over a 17-month period. Distinct seasonal variation in bacteriophage numbers was evident, peaking between June and August. Removal of cattle from the pasture precipitated a reduction in bacteriophage numbers, and during the winter months no bacteriophages infecting Escherichia coli were detected, a surprising occurrence considering that 10³¹ tailed bacteriophages are estimated to populate the globe. To address this discrepancy, a culture-independent method based on quantitative PCR was developed. Primers targeting the Q gene and stx genes were designed that accurately and discriminately quantified artificial mixed lambdoid bacteriophage populations. Application of these primer sets to water samples possessing no detectable phages by plaque assay demonstrated that the number of lambdoid bacteriophages ranged from 4.7 × 10⁴ to 6.5 × 10⁶ ml⁻¹, with one in 10³ free lambdoid bacteriophages carrying a Shiga toxin operon (stx). Specific molecular biological tools and discriminatory gene targets have enabled virus populations in the natural environment to be enumerated, and similar strategies could replace existing propagation-dependent techniques, which grossly underestimate the abundance of viral entities.
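Quantification by qPCR of this kind usually proceeds by reading copy numbers off a linear standard curve. A minimal sketch, with hypothetical curve parameters and Cq values rather than the study's data:

```python
import numpy as np

def copies_per_ml(cq, slope, intercept, template_volume_ml):
    """Convert a qPCR quantification cycle (Cq) to copies per ml of sample
    using a linear standard curve: Cq = slope * log10(copies) + intercept."""
    log10_copies = (cq - intercept) / slope
    return 10.0 ** log10_copies / template_volume_ml

# Hypothetical standard-curve parameters from a serial dilution of a
# template carrying the lambdoid Q gene (slope ~ -3.32 at 100% efficiency)
slope, intercept = -3.32, 38.5
for cq in (22.1, 25.4, 28.7):
    print(cq, f"{copies_per_ml(cq, slope, intercept, 0.005):.2e}")
```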

Relevance: 30.00%

Abstract:

Liquid clouds play a profound role in the global radiation budget, but it is difficult to remotely retrieve their vertical profile. Ordinary narrow field-of-view (FOV) lidars receive a strong return from such clouds, but the information is limited to the first few optical depths. Wide-angle multiple-FOV lidars can isolate radiation scattered multiple times before returning to the instrument, often penetrating much deeper into the cloud than the singly scattered signal. These returns potentially contain information on the vertical profile of the extinction coefficient, but are challenging to interpret due to the lack of a fast radiative transfer model for simulating them. This paper describes a variational algorithm that incorporates a fast forward model, based on the time-dependent two-stream approximation, and its adjoint. Application of the algorithm to simulated data from a hypothetical airborne three-FOV lidar with a maximum footprint width of 600 m suggests that this approach should be able to retrieve the extinction structure down to an optical depth of around 6, and total optical depth up to at least 35, depending on the maximum lidar FOV. The convergence behavior of Gauss-Newton and quasi-Newton optimization schemes is compared. We then present results from an application of the algorithm to observations of stratocumulus by the 8-FOV airborne "THOR" lidar. It is demonstrated how the averaging kernel can be used to diagnose the effective vertical resolution of the retrieved profile, and therefore the depth to which information on the vertical structure can be recovered. This work enables returns from spaceborne lidar and radar subject to multiple scattering to be exploited more rigorously than previously possible.
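The variational approach described can be sketched as minimising a cost function whose gradient is supplied by the adjoint of the forward model. Below, a quasi-Newton (L-BFGS) minimisation of such a cost, with a toy linear operator standing in for the time-dependent two-stream model; all dimensions and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the two-stream forward model: a linear operator H
# mapping an extinction profile to multi-FOV lidar returns.
rng = np.random.default_rng(2)
n_layers, n_obs = 30, 90
H = rng.random((n_obs, n_layers)) / n_layers
x_true = np.exp(-np.linspace(0, 2, n_layers))          # extinction profile
y = H @ x_true + 1e-3 * rng.standard_normal(n_obs)

x_a = np.full(n_layers, 0.5)                            # a priori profile
R_inv, B_inv = 1e6 * np.eye(n_obs), np.eye(n_layers)

def cost_and_grad(x):
    """Variational cost J(x) and its gradient; H.T plays the adjoint role."""
    dy, dx = H @ x - y, x - x_a
    J = 0.5 * (dy @ R_inv @ dy + dx @ B_inv @ dx)
    grad = H.T @ (R_inv @ dy) + B_inv @ dx
    return J, grad

res = minimize(cost_and_grad, x_a, jac=True, method="L-BFGS-B")
print(res.success, np.round(res.x[:5], 3))
```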

Relevance: 30.00%

Abstract:

In this paper, the global market potential of solar thermal, photovoltaic (PV) and combined photovoltaic/thermal (PV/T) technologies, now and in the near future, was discussed. The concept of PV/T and the theory behind PV/T operation were briefly introduced, and standards for evaluating the technical, economic and environmental performance of PV/T systems were addressed. A comprehensive literature review of R&D work and practical applications of PV/T technology was presented, and the review results were critically analysed in terms of PV/T type and research methodology used. The major features, current status, research focuses and existing difficulties/barriers related to the various types of PV/T were identified. The research methods applied to PV/T technology, including theoretical analysis and computer simulation, experimental and combined experimental/theoretical investigation, demonstration and feasibility study, as well as economic and environmental analyses, were individually discussed, and the achievements and remaining problems in each research-method category were described. Finally, opportunities for further PV/T research were identified. The review indicated that air- and water-based PV/T systems are the most commonly used technologies, but their heat-removal effectiveness is relatively low. Refrigerant- and heat-pipe-based PV/T systems, although still at the research/laboratory stage, can achieve much higher solar conversion efficiencies than air- and water-based systems; however, they face several practical technical challenges that require further resolution. The review suggested that further work could be undertaken to (1) develop new feasible, economic and energy-efficient PV/T systems; (2) optimise the structural/geometrical configurations of existing PV/T systems; (3) study the long-term dynamic performance of PV/T systems; (4) demonstrate PV/T systems in real buildings and conduct feasibility studies; and (5) carry out advanced economic and environmental analyses. This review helps identify the questions remaining in PV/T technology and new research topics/directions to further improve PV/T performance, remove the barriers to practical PV/T application, establish standards/regulations for PV/T design and installation, and promote its market penetration throughout the world.

Relevance: 30.00%

Abstract:

Reaction of salicylaldehyde semicarbazone (L¹), 2-hydroxyacetophenone semicarbazone (L²), and 2-hydroxynaphthaldehyde semicarbazone (L³) with [Pd(PPh₃)₂Cl₂] in ethanol in the presence of a base (NEt₃) affords a family of yellow complexes (1a, 1b and 1c, respectively). In these complexes the semicarbazone ligands are coordinated to palladium in a rather unusual tridentate ONN mode, and a PPh₃ also remains coordinated to the metal center. Crystal structures of the 1b and 1c complexes have been determined, and the structure of 1a has been optimized by a DFT method. In these complexes two potential donor sites of the coordinated semicarbazone, viz. the hydrazinic nitrogen and the carbonylic oxygen, remain unutilized. Further reaction of these palladium complexes (1a, 1b and 1c) with [Ru(PPh₃)₂(CO)₂Cl₂] yields a family of orange complexes (2a, 2b and 2c, respectively). In these heterodinuclear (Pd-Ru) complexes, the hydrazinic nitrogen (via dissociation of the N-H proton) and the carbonylic oxygen of the palladium-containing fragment bind to the ruthenium center by displacing a chloride and a carbonyl. Crystal structures of 2a and 2c have been determined, and the structure of 2b has been optimized by a DFT method. All the complexes show characteristic ¹H NMR spectra and intense absorptions in the visible and ultraviolet regions. Cyclic voltammetry on all the complexes shows an irreversible oxidation of the coordinated semicarbazone within 0.86-0.93 V vs. SCE, and an irreversible reduction of the same ligand within -0.96 to -1.14 V vs. SCE. Both the mononuclear (1a, 1b and 1c) and heterodinuclear (2a, 2b and 2c) complexes are found to efficiently catalyze Suzuki, Heck and Sonogashira-type C-C coupling reactions utilizing a variety of aryl bromides and aryl chlorides. The Pd-Ru complexes (2a, 2b and 2c) are found to be better catalysts than the Pd complexes (1a, 1b and 1c) for Suzuki and Heck coupling reactions.

Relevance: 30.00%

Abstract:

Purpose: This paper aims to design an evaluation method that enables an organization to assess its current IT landscape and provides a readiness assessment prior to Software as a Service (SaaS) adoption. Design/methodology/approach: The research employs a mix of quantitative and qualitative approaches for conducting an IT application assessment. Quantitative data, such as end users' feedback on the IT applications, contribute to the technical impact on efficiency and productivity. Qualitative data, such as business domain, business services and IT application cost drivers, are used to determine the business value of the IT applications in an organization. Findings: The assessment of IT applications leads to decisions on the suitability of each IT application for migration to a cloud environment. Research limitations/implications: The evaluation of how a particular IT application impacts a business service is based on logical interpretation. A data-mining method is suggested in order to derive patterns of IT application capabilities. Practical implications: This method has been applied in a local council in the UK, helping the council to decide the future status of its IT applications for cost-saving purposes.
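As an illustration only (the scoring scales, thresholds and application names below are hypothetical, not the paper's), such an assessment might combine a technical score from end-user feedback with a business-value score and map each application to a migration decision:

```python
# Hypothetical scoring sketch: each IT application gets a technical score
# (from end-user feedback on efficiency/productivity) and a business-value
# score (from business domain/service/cost analysis), both on 0-10 scales.
APPS = {
    "planning-portal": (8.2, 7.5),
    "legacy-payroll":  (3.1, 8.8),
    "fax-gateway":     (2.4, 1.9),
}

def saas_recommendation(technical, business_value):
    """Quadrant rule of thumb; thresholds are illustrative, not the paper's."""
    if business_value >= 5:
        return "migrate to SaaS" if technical < 5 else "retain / migrate later"
    return "retire" if technical < 5 else "consolidate"

for app, (tech, value) in APPS.items():
    print(f"{app:16s} -> {saas_recommendation(tech, value)}")
```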

Relevance: 30.00%

Abstract:

The Normal Quantile Transform (NQT) has been used in many hydrological and meteorological applications in order to make the Cumulative Distribution Function (CDF) of observed, simulated and forecast river discharge, water level or precipitation data Gaussian. It is also at the heart of the meta-Gaussian model for assessing the total predictive uncertainty of the Hydrological Uncertainty Processor (HUP) developed by Krzysztofowicz. In the field of geostatistics this transformation is better known as the Normal-Score Transform. In this paper some possible problems caused by small sample sizes when applying the NQT in flood forecasting systems are discussed, and a novel way to solve the problem is outlined, combining extreme value analysis and non-parametric regression methods. The method is illustrated by examples of hydrological stream-flow forecasts.
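A minimal sketch of the NQT and its back-transform shows exactly where small samples bite: transformed values beyond the range of the observations must be extrapolated, which is where an extreme-value tail model comes in. The plotting-position formula and data below are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.stats import norm, rankdata

def nqt(x):
    """Normal Quantile (normal-score) Transform: map each value to the
    standard-normal quantile of its empirical plotting position."""
    ranks = rankdata(x)                  # 1..n, ties averaged
    p = ranks / (len(x) + 1.0)           # Weibull plotting positions
    return norm.ppf(p)

def inverse_nqt(z, x_observed):
    """Back-transform via interpolation of the empirical CDF; values of z
    outside the range seen in x_observed are clamped here, which is the
    small-sample tail problem the paper addresses."""
    xs = np.sort(x_observed)
    return np.interp(z, nqt(xs), xs)

flows = np.random.default_rng(3).gamma(2.0, 50.0, size=40)   # skewed flows
z = nqt(flows)
print("transformed range:", z.min().round(2), z.max().round(2))
print("back-transform of [-3, 0, 3]:",
      np.round(inverse_nqt(np.array([-3.0, 0.0, 3.0]), flows), 1))
```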

Relevance: 30.00%

Abstract:

This work proposes a method to objectively determine the most suitable analogue redesign method for forward-type converters under digital voltage-mode control. Particular emphasis is placed on determining the method which allows the highest phase margin at the particular switching and crossover frequencies chosen by the designer. It is shown that at high crossover frequencies with respect to the switching frequency, controllers designed using backward integration have the largest phase margin, whereas at low crossover frequencies with respect to the switching frequency, controllers designed using bilinear integration have the largest phase margins. An accurate model of the power stage is used for simulation, and experimental results from a buck converter are collected. The performance of the digital controllers is compared to that of the equivalent analogue controller both in simulation and in experiment, with excellent correlation between the simulation and experimental results. This work allows designers to confidently choose the analogue redesign method which yields the greater phase margin for their application.
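The two discretisation methods compared can be reproduced with standard tools. The sketch below discretises a hypothetical analogue compensator (component values are illustrative, not the paper's design) by backward-difference and bilinear (Tustin) integration and compares the phase each leaves at a chosen crossover frequency:

```python
import numpy as np
from scipy import signal

# Illustrative analogue type-II compensator: K(s/wz + 1) / (s(s/wp + 1))
wz, wp, K = 2 * np.pi * 1e3, 2 * np.pi * 30e3, 5e4
num, den = [K / wz, K], [1 / wp, 1, 0]

fsw = 200e3                                  # switching frequency
Ts = 1 / fsw
f_cross = 20e3                               # crossover at 10% of fsw

for method in ("backward_diff", "bilinear"):
    numd, dend, _ = signal.cont2discrete((num, den), Ts, method=method)
    # Evaluate the discrete frequency response at the crossover frequency
    w, h = signal.dfreqresp(signal.dlti(numd.flatten(), dend, dt=Ts),
                            w=[2 * np.pi * f_cross * Ts])
    print(f"{method:14s} phase at crossover: "
          f"{np.degrees(np.angle(h))[0]:7.2f} deg")
```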

Relevance: 30.00%

Abstract:

Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent, since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables.
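A minimal sketch of such a transform for a scalar state: solve the discrete optimal transport problem between the importance weights and uniform weights as a linear program, then map the particles through the coupling. The ensemble size, likelihood and observation value below are toy choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
M = 10                                        # ensemble size
x = rng.standard_normal(M)                    # prior particles (scalar state)

# Importance weights from a toy Gaussian likelihood (observation y = 1.0)
w = np.exp(-0.5 * (x - 1.0) ** 2 / 0.5 ** 2)
w /= w.sum()

# Discrete optimal transport: move mass from weights w to uniform 1/M
C = (x[:, None] - x[None, :]) ** 2            # squared-distance cost c_ij
A_eq, b_eq = [], []
for i in range(M):                            # row sums: sum_j T_ij = w_i
    a = np.zeros((M, M)); a[i, :] = 1; A_eq.append(a.ravel()); b_eq.append(w[i])
for j in range(M):                            # col sums: sum_i T_ij = 1/M
    a = np.zeros((M, M)); a[:, j] = 1; A_eq.append(a.ravel()); b_eq.append(1 / M)
res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
T = res.x.reshape(M, M)

x_post = M * (T.T @ x)                        # transformed equal-weight particles
print("posterior mean:", x_post.mean(), "weighted mean:", w @ x)
```

By construction the transformed ensemble reproduces the importance-weighted mean while keeping equal particle weights, which is the point of the transform.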

Relevance: 30.00%

Abstract:

Information technology has become heavily embedded in business operations. As business needs change over time, IT applications are expected to continue providing the required support. Whether the existing IT applications are still fit for the business purpose they were intended for, or new IT applications should be introduced, is a strategic decision for business, IT and business-aligned IT. In this paper, we present a method which aims to analyse business functions and IT roles, and to evaluate business-aligned IT from both social and technical perspectives. The method introduces a set of techniques that systematically supports the evaluation of existing IT applications in relation to their technical capabilities for maximising business value. Furthermore, we discuss the evaluation process and results, which are illustrated and validated through a real-life case study of a UK borough council, followed by a discussion of the implications for researchers and practitioners.

Relevance: 30.00%

Abstract:

The problem of technology obsolescence in information-intensive businesses (software and hardware no longer being supported, and replaced by improved and different solutions), combined with a cost-constrained market, can severely increase costs and operational, and ultimately reputational, risk. Although many businesses recognise technological obsolescence, the pervasive nature of technology often means they have little information with which to identify the risk and location of pending obsolescence, and little money to apply to the solution. This paper presents a low-cost structured method to identify obsolete software and the risk its obsolescence poses, in which the structure of a business and its supporting IT resources can be captured, modelled and analysed, and the risk to the business of technology obsolescence identified, enabling remedial action using qualified obsolescence information. The technique is based on a structured modelling approach using enterprise architecture models and a heatmap algorithm to highlight high-risk obsolescent elements. The method has been tested and applied in practice in three consulting studies carried out by Capgemini involving four UK police forces; the generic technique could, however, be applied to any industry, and there are plans to improve it using ontology framework methods. This paper contains details of the enterprise architecture meta-models and related modelling.
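As a hypothetical illustration of the heatmap idea (the elements, dates, criticality scale and risk rule below are invented, not Capgemini's actual models), each software element's obsolescence risk can be scored from its support horizon and the criticality of the business function it supports:

```python
from datetime import date

# Hypothetical enterprise-architecture fragment: software elements with
# vendor support-end dates and the criticality of the business function
# each one supports (1 = low, 5 = mission critical).
ELEMENTS = {
    "crime-recording-db": {"support_ends": date(2024, 9, 30), "criticality": 5},
    "duty-rostering":     {"support_ends": date(2028, 1, 31), "criticality": 3},
    "intranet-cms":       {"support_ends": date(2024, 3, 1),  "criticality": 2},
}

def heatmap_colour(support_ends, criticality, today=date(2024, 1, 1)):
    """Illustrative heatmap rule: risk grows as support expiry nears and
    with the criticality of the dependent business function."""
    years_left = max((support_ends - today).days / 365.25, 0.0)
    risk = criticality / (1.0 + years_left)
    return "red" if risk > 2 else "amber" if risk > 1 else "green"

for name, e in ELEMENTS.items():
    print(f"{name:20s} {heatmap_colour(e['support_ends'], e['criticality'])}")
```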

Relevance: 30.00%

Abstract:

Performance modelling is a useful tool in the lifecycle of high performance scientific software, such as weather and climate models, especially as a means of ensuring efficient use of available computing resources. In particular, sufficiently accurate performance prediction could reduce the effort and experimental computer time required when porting and optimising a climate model to a new machine. In this paper, traditional techniques are used to predict the computation time of a simple shallow water model which is illustrative of the computation (and communication) involved in climate models. These models are compared with real execution data gathered on AMD Opteron-based systems, including several phases of the U.K. academic community HPC resource, HECToR. Some success is achieved in relating source code to achieved performance for the K10 series of Opterons, but the method is found to be inadequate for the next-generation Interlagos processor. This experience leads to the investigation of a data-driven application benchmarking approach to performance modelling. Results for an early version of the approach are presented, using the shallow model as an example.
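In the spirit of the data-driven benchmarking approach, one might fit a simple timing model to measured runs and use it for prediction. The sketch below uses invented timings and a hypothesised compute-plus-communication form, not HECToR data:

```python
import numpy as np

# Hypothetical benchmark timings (seconds) of a shallow-water kernel on
# grids of N points using p processes (illustrative, not measured data).
runs = np.array([
    # N,       p,    time
    [256**2,   16,   1.92],
    [256**2,   64,   0.55],
    [512**2,   16,   7.71],
    [512**2,   64,   2.08],
    [1024**2,  64,   8.33],
    [1024**2,  256,  2.41],
])
N, p, t = runs[:, 0], runs[:, 1], runs[:, 2]

# Simple data-driven model: t = a + b*(N/p) + c*log2(p)  (compute + comms)
A = np.column_stack([np.ones_like(N), N / p, np.log2(p)])
(a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)
predict = lambda N, p: a + b * N / p + c * np.log2(p)
print(f"predicted time for 1024^2 on 128 procs: {predict(1024**2, 128):.2f} s")
```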

Relevance: 30.00%

Abstract:

When studying hydrological processes with a numerical model, global sensitivity analysis (GSA) is essential if one is to understand the impact of model parameters and model formulation on results. However, different definitions of sensitivity can lead to differences in the ranking of importance of the different model factors. Here we combine a fuzzy performance function with different methods of calculating global sensitivity to perform a multi-method global sensitivity analysis (MMGSA). We use an application of a finite element subsurface flow model (ESTEL-2D) to a flood inundation event on a floodplain of the River Severn to illustrate this new methodology. We demonstrate the utility of the method for model understanding and show how the prediction of state variables, such as Darcian velocity vectors, can be affected by such an MMGSA. This paper is a first attempt to use GSA with a numerically intensive hydrological model.
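The core observation here, that different sensitivity definitions can rank factors differently, is easy to reproduce. The sketch below compares a linear (regression-based) measure with a crude variance-based first-order index on a toy model with a nonlinearity and an interaction; all choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
X = rng.uniform(0, 1, (n, 3))                # three model factors
# Toy model standing in for the subsurface-flow model: nonlinear in x1,
# with an x0*x2 interaction
y = 2 * X[:, 0] + np.sin(2 * np.pi * X[:, 1]) + 3 * X[:, 0] * X[:, 2]

# Method 1: squared standardized regression coefficients (linear measure)
Xs = (X - X.mean(0)) / X.std(0)
src2 = np.linalg.lstsq(Xs, (y - y.mean()) / y.std(), rcond=None)[0] ** 2

# Method 2: crude first-order variance-based index by binning each factor
def first_order(xi, y, bins=50):
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(xi, edges) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

si = [first_order(X[:, i], y) for i in range(3)]
print("regression ranking    :", np.argsort(src2)[::-1])
print("variance-based ranking:", np.argsort(si)[::-1])
```

The nonlinear factor x1 is nearly invisible to the regression-based measure but ranks highly under the variance-based index, so the two methods order the factors differently.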

Relevance: 30.00%

Abstract:

We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS) where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM), suitable for complex-valued multiple-input-multiple-output processing, is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem, similarly to the conventional ELM classifier formulation. When training the CELM, the Karush-Kuhn-Tucker (KKT) theorem is used to solve a dual optimization problem that simultaneously satisfies the smallest-training-error and smallest-output-weight-norm criteria. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyperplanes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyperplanes through orthogonal projections; the six hyperplanes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks fast, the proposed formulations are of interest to real-time applications.
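For orientation, a conventional real-valued ELM classifier (to which, as the abstract notes, the complex-valued formulation reduces) can be sketched as follows; the ridge term stands in loosely for the regularised KKT solution, and the data are a toy stand-in for the terahertz spectra.

```python
import numpy as np

class ELMClassifier:
    """Minimal real-valued ELM: a random hidden layer followed by
    least-squares output weights with a small ridge term."""
    def __init__(self, n_hidden=100, ridge=1e-3, seed=0):
        self.n_hidden, self.ridge = n_hidden, ridge
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.classes = np.unique(y)
        T = (y[:, None] == self.classes).astype(float) * 2 - 1  # one-vs-all
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # Regularised least-squares output weights
        self.beta = np.linalg.solve(
            H.T @ H + self.ridge * np.eye(self.n_hidden), H.T @ T)
        return self

    def predict(self, X):
        return self.classes[np.argmax(self._hidden(X) @ self.beta, axis=1)]

# Toy 4-class problem standing in for the terahertz spectra example
rng = np.random.default_rng(1)
X = rng.standard_normal((400, 2)) + np.repeat(
    np.array([[3, 3], [3, -3], [-3, 3], [-3, -3]]), 100, axis=0)
y = np.repeat(np.arange(4), 100)
model = ELMClassifier().fit(X, y)
print("training accuracy:", (model.predict(X) == y).mean())
```

Only the output weights are trained, by a single linear solve, which is the source of the low computational complexity the abstract contrasts with SVM training.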