877 results for new method
Abstract:
HIV/AIDS is one of the most destructive epidemics in recorded history, claiming an estimated 2.4–3.3 million lives every year. Even though there is no cure for this pandemic, the ELISA and Western blot tests are the only tests currently available for detecting HIV/AIDS. This article proposes a new method of detecting HIV/AIDS based on measuring the dielectric properties of blood at microwave frequencies. The measurements were made in the microwave S-band using the rectangular cavity perturbation technique, with blood samples from healthy donors as well as from HIV/AIDS patients. An appreciable change is observed in the dielectric properties of the patient samples compared with the normal healthy samples, and these measurements were in good agreement with clinical results. This measurement offers an alternative in vitro method of diagnosing HIV/AIDS using microwaves.
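For reference, the small-perturbation relations commonly used to extract complex permittivity from such cavity measurements can be sketched as follows. This is the generic textbook form, not the authors' exact calibration, and every numerical value in the usage example is invented:

```python
def complex_permittivity(f_c, f_s, Q_c, Q_s, V_c, V_s):
    """Complex relative permittivity from small-perturbation cavity theory.

    f_c, Q_c: resonant frequency and quality factor of the empty cavity;
    f_s, Q_s: the same with the sample inserted;
    V_c, V_s: cavity and sample volumes (same units).
    Valid only for small perturbations (f_c - f_s << f_c).
    """
    eps_real = 1.0 + (f_c - f_s) / (2.0 * f_s) * (V_c / V_s)
    eps_imag = (V_c / (4.0 * V_s)) * (1.0 / Q_s - 1.0 / Q_c)
    return eps_real, eps_imag

# Illustrative S-band numbers (all invented): a 2 MHz downward shift and a
# drop in Q when a small sample is inserted into the cavity.
eps_r, eps_i = complex_permittivity(f_c=2.450e9, f_s=2.448e9,
                                    Q_c=5000.0, Q_s=2000.0,
                                    V_c=1.0e6, V_s=1.0e3)
```

The real part follows the frequency shift and the imaginary (loss) part follows the change in quality factor, which is what allows lossy biological samples such as blood to be distinguished.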
Abstract:
The work presented in this paper belongs to the power quality area and deals with voltage sags in power transmission and distribution systems. Propagating throughout the power network, voltage sags can cause many problems for domestic and industrial loads, and these can be financially costly. To impose penalties on the responsible party and to improve monitoring and mitigation strategies, sags must be located in the power network. With this objective, this paper proposes a new method for associating a sag waveform with its origin in transmission and distribution networks. It solves this problem by developing hybrid methods that employ multiway principal component analysis (MPCA) as a dimension-reduction tool. MPCA re-expresses sag waveforms in a new subspace using just a few scores. We train several well-known classifiers with these scores and use them to classify future sags. The dimension-reduction and classification capabilities of the proposed method are examined using real data gathered from three substations in Catalonia, Spain. The classification rates obtained confirm the effectiveness of the developed hybrid methods as new tools for sag classification.
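The scores-then-classifier pipeline can be illustrated on a toy problem. This sketch uses ordinary PCA via SVD and a nearest-centroid classifier on synthetic rectangular sag envelopes; the waveforms, class structure, and parameters are invented, not the Catalonia data or the paper's MPCA unfolding:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

def sag(depth, start, dur, noise=0.02):
    """Synthetic voltage-sag envelope: unit voltage with a rectangular dip."""
    v = np.ones_like(t)
    v[(t >= start) & (t < start + dur)] -= depth
    return v + noise * rng.standard_normal(t.size)

# Two hypothetical origin classes with different sag signatures.
X_a = np.array([sag(0.5, 0.2, 0.2) for _ in range(40)])  # "transmission-like"
X_b = np.array([sag(0.3, 0.5, 0.4) for _ in range(40)])  # "distribution-like"
X = np.vstack([X_a, X_b])
y = np.array([0] * 40 + [1] * 40)

# PCA via SVD: re-express each 200-sample waveform with just a few scores.
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 3
scores = (X - mu) @ Vt[:k].T          # n_samples x k

# Nearest-centroid classifier trained in score space.
centroids = np.array([scores[y == c].mean(axis=0) for c in (0, 1)])

def classify(waveform):
    s = (waveform - mu) @ Vt[:k].T
    return int(np.argmin(np.linalg.norm(centroids - s, axis=1)))

acc = np.mean([classify(X[i]) == y[i] for i in range(len(X))])
```

The dimension reduction from 200 samples to 3 scores is the essential step; any standard classifier can then be trained on the scores.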
Abstract:
The time-of-detection method for aural avian point counts is a new method of estimating abundance that allows for uncertain probability of detection. The method has been specifically designed to allow for variation in the singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals in which each bird sings. The method can be viewed as generating data equivalent to closed capture–recapture information. It differs from the distance and multiple-observer methods in that not all the birds are required to sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, with a laptop computer sending signals to audio stations distributed around a point. The system mimics actual aural avian point counts but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused both by the very low detection probabilities of all distant individuals and by individuals with low singing rates also having very low detection probabilities.
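A minimal sketch of the kind of closed-population estimator such detection histories support. This is the simplest constant-probability "M0" model solved by naive fixed-point iteration, not the heterogeneity models the paper evaluates, and the detection histories below are constructed, not field data:

```python
def estimate_abundance(histories, n_intervals, iters=200):
    """Closed-population abundance estimate from detection histories.

    histories: one detection history per detected bird, e.g. [1, 0, 1, 0]
    for a bird heard in subintervals 1 and 3 of four.
    Assumes a constant per-subinterval detection probability p (model M0)
    and solves N = C / (1 - (1 - p)**J), p = D / (N * J) by fixed point.
    """
    C = len(histories)                      # birds detected at least once
    D = sum(sum(h) for h in histories)      # total detections
    J = n_intervals
    p = 0.5
    for _ in range(iters):
        N = C / (1.0 - (1.0 - p) ** J)      # correct C for missed birds
        p = D / (N * J)                     # update detection probability
    return N, p

# 87 detected birds with 160 total detections over four 2-min subintervals,
# consistent with a true population of ~100 singing at p = 0.4 per interval.
histories = [[1, 0, 0, 0]] * 14 + [[1, 1, 0, 0]] * 73
N_hat, p_hat = estimate_abundance(histories, n_intervals=4)
```

Heterogeneous singing rates violate the constant-p assumption, which is exactly why the simulated heterogeneous populations are underestimated.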
Abstract:
There has been recent interest in using inorganic magnetic nanoparticles as vehicles to carry biomolecules for various biophysical applications, but direct attachment of the molecules is known to alter their conformation, leading to attenuated activity. In addition, surface immobilization has been limited to monolayer coverage. We show that alternating depositions of a negatively charged protein, typically bovine serum albumin (BSA), and a positively charged aminocarbohydrate template such as glycol chitosan (GC) can be carried out at pH 7.4 on the surface of magnetic iron oxide nanoparticles in colloidal form. Circular dichroism (CD) clearly reveals that the secondary structure of BSA entrapped by sequential deposition in this manner remains unaltered, in sharp contrast to previous attempts. Probing the binding properties of the entrapped BSA with small molecules (Site I and Site II drug compounds) confirms for the first time the full retention of its biological activity compared with native BSA, which also implies that the entrapped protein molecules remain readily accessible through the porous overlayers. This work suggests a new method to immobilize and store protein molecules beyond monolayer adsorption on a magnetic nanoparticle surface without significant structural alteration, which may find applications in magnetically recoverable enzymes or protein delivery.
Abstract:
Stable isotopic characterization of chlorine in chlorinated aliphatic pollution is potentially very valuable for risk assessment and for monitoring remediation or natural attenuation. The approach has been underused because of the complexity and duration of the analysis. We have developed a new method that eliminates sample preparation. Gas chromatography produces individually eluted sample peaks for analysis. The He carrier gas is mixed with Ar and introduced directly into the torch of a multicollector ICPMS. The MC-ICPMS is run at a high mass resolution of ≥10 000 to eliminate the interference of ArH with Cl at mass 37. The standardization approach is similar to that of continuous-flow stable isotope analysis, in which sample and reference materials are measured successively. We have measured PCE relative to a laboratory TCE standard mixed with the sample. Solvent samples of 200 nmol to 1.3 μmol (24–165 μg of Cl) were measured. The PCE gave the same value relative to the TCE as measured by the conventional method, with a precision of 0.12‰ (2 × standard error) but poorer precision for the smaller samples.
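The sample-standard comparison behind such measurements reduces to a simple ratio calculation. A schematic example of delta-value computation with bracketing reference measurements; all ratio values are invented for illustration:

```python
def delta_per_mil(r_sample, r_standard):
    """Delta value in per mil from measured 37Cl/35Cl isotope ratios."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Bracketing: average the reference ratio measured before and after the
# sample, as in continuous-flow stable isotope analysis.
r_ref = (0.31978 + 0.31982) / 2.0   # TCE reference ratios (invented)
r_pce = 0.31990                     # PCE sample ratio (invented)
d37 = delta_per_mil(r_pce, r_ref)
```

Measuring the reference in the same run as the sample cancels instrumental drift, which is what makes the successive-measurement standardization workable at these small sample sizes.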
Abstract:
Here we describe a novel, inexpensive and simple method for preserving RNA that reduces handling stress in aquatic invertebrates following ecotoxicogenomic experimentation. The application of the method is based on transcriptomic experiments conducted on Daphnia magna, but it may easily be applied to a range of other aquatic organisms of similar size, with e.g. the amphipod Gammarus pulex representing an upper size limit. We explain in detail how to apply this new method, named the "Cylindrical Sieve (CS) system", and highlight its advantages and disadvantages.
Abstract:
Templated sol-gel encapsulation of surfactant-stabilised micelles containing metal precursor(s) within an ultra-thin porous silica coating allows solvent extraction of the organic stabiliser from the composites while in the colloidal state. Hence, a new method of preparing supported alloy catalysts using the inorganic silica-stabilised, nano-sized, homogeneously mixed silver–platinum (Ag–Pt) colloidal particles is reported.
Abstract:
The level set method is commonly used for image noise removal. Existing studies concentrate mainly on determining the speed function of the evolution equation. Based on the idea of the Canny operator, this letter introduces a new method of controlling the level set evolution, in which edge strength is taken into account when choosing curvature flows for the speed function, and the normal-to-edge direction is used to orient the diffusion of the moving interface. The addition of an energy term penalizing irregularity allows better preservation of local edge information. In contrast with previous Canny-based level set methods, which usually adopt a two-stage framework, the proposed algorithm executes all of the above operations in a single process during noise removal.
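The curvature-flow building block that such level set schemes control can be sketched generically. This is one explicit step of plain mean-curvature motion with central differences, not the letter's edge-weighted scheme, which additionally modulates the flow by Canny-style edge strength and direction; the test image is synthetic:

```python
import numpy as np

def curvature_flow_step(img, dt=0.1, eps=1e-8):
    """One explicit step of mean-curvature motion, I_t = kappa * |grad I|,
    using kappa*|grad I| = (Ixx*Iy^2 - 2*Ix*Iy*Ixy + Iyy*Ix^2) / |grad I|^2."""
    Ix = (np.roll(img, -1, 1) - np.roll(img, 1, 1)) / 2.0
    Iy = (np.roll(img, -1, 0) - np.roll(img, 1, 0)) / 2.0
    Ixx = np.roll(img, -1, 1) - 2.0 * img + np.roll(img, 1, 1)
    Iyy = np.roll(img, -1, 0) - 2.0 * img + np.roll(img, 1, 0)
    Ixy = (np.roll(np.roll(img, -1, 1), -1, 0)
           - np.roll(np.roll(img, -1, 1), 1, 0)
           - np.roll(np.roll(img, 1, 1), -1, 0)
           + np.roll(np.roll(img, 1, 1), 1, 0)) / 4.0
    num = Ixx * Iy ** 2 - 2.0 * Ix * Iy * Ixy + Iyy * Ix ** 2
    return img + dt * num / (Ix ** 2 + Iy ** 2 + eps)

# Noisy synthetic image: a bright square plus Gaussian noise.
rng = np.random.default_rng(1)
noisy = np.zeros((64, 64))
noisy[16:48, 16:48] = 1.0
noisy += 0.1 * rng.standard_normal(noisy.shape)
smoothed = noisy.copy()
for _ in range(20):
    smoothed = curvature_flow_step(smoothed)
```

Plain curvature flow smooths isophotes everywhere; the point of the Canny-based modification is to suppress this flow across strong edges while diffusing along them.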
Abstract:
A novel algorithm for solving nonlinear discrete time optimal control problems with model-reality differences is presented. The technique uses dynamic integrated system optimisation and parameter estimation (DISOPE) which achieves the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimisation procedure. A new method for approximating some Jacobian trajectories required by the algorithm is introduced. It is shown that the iterative procedure associated with the algorithm naturally suits applications to batch chemical processes.
Abstract:
We present a new method to determine mesospheric electron densities from partially reflected medium frequency radar pulses. The technique uses an optimal estimation inverse method and retrieves both an electron density profile and a gradient electron density profile. As well as accounting for the absorption of the two magnetoionic modes formed by ionospheric birefringence of each radar pulse, the forward model of the retrieval parameterises possible Fresnel scatter of each mode by fine electronic structure, phase changes of each mode due to Faraday rotation and the dependence of the amplitudes of the backscattered modes upon pulse width. Validation results indicate that known profiles can be retrieved and that χ2 tests upon retrieval parameters satisfy validity criteria. Application to measurements shows that retrieved electron density profiles are consistent with accepted ideas about seasonal variability of electron densities and their dependence upon nitric oxide production and transport.
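The optimal estimation machinery underlying such retrievals can be sketched for a linear forward model. This is the generic maximum a posteriori form with a chi-squared consistency check, not the paper's magnetoionic forward model; all profiles, kernels, and covariances below are invented:

```python
import numpy as np

def oem_retrieval(y, K, x_a, S_a, S_e):
    """Linear optimal-estimation (maximum a posteriori) retrieval.

    y: measurements; K: forward-model Jacobian; x_a, S_a: prior (a priori)
    mean and covariance; S_e: measurement-noise covariance.
    Returns the retrieved state and its posterior covariance.
    """
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)
    return x_hat, S_hat

# Toy profile retrieval: a smooth "electron density" bump observed through
# broad overlapping kernels (all values invented for illustration).
rng = np.random.default_rng(2)
n, m = 20, 15
z = np.linspace(0.0, 1.0, n)
x_true = np.exp(-((z - 0.5) / 0.2) ** 2)
K = np.array([np.exp(-((z - c) / 0.15) ** 2) for c in np.linspace(0.0, 1.0, m)])
S_e = 1e-4 * np.eye(m)
y = K @ x_true + 0.01 * rng.standard_normal(m)
x_hat, S_hat = oem_retrieval(y, K, np.zeros(n), 0.5 * np.eye(n), S_e)

# Chi-squared consistency of the fit against the assumed measurement noise.
resid = y - K @ x_hat
chi2 = float(resid @ np.linalg.inv(S_e) @ resid)
```

The chi-squared value plays the same role as the validity tests mentioned in the abstract: if it is inconsistent with the noise model, the retrieval or its error assumptions are suspect.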
Abstract:
Volume determination of tephra deposits is necessary for the assessment of the dynamics and hazards of explosive volcanoes. Several methods have been proposed during the past 40 years that include the analysis of crystal concentration of large pumices, integrations of various thinning relationships, and the inversion of field observations using analytical and computational models. Despite their strong dependence on tephra-deposit exposure and the distribution of isomass/isopach contours, empirical integrations of deposit thinning trends still represent the most widely adopted strategy owing to their practical and fast application. The most recent methods involve best fitting of thinning data using various exponential segments or a power-law curve on semilog plots of thickness (or mass/area) versus the square root of isopach area. The exponential method is mainly sensitive to the number and the choice of straight segments, whereas the power-law method can better reproduce the natural thinning of tephra deposits but is strongly sensitive to the proximal or distal extreme of integration. We analyze a large data set of tephra deposits and propose a new empirical method for the determination of tephra-deposit volumes that is based on the integration of the Weibull function. The new method shows better agreement with observed data, reconciling the debate on the use of the exponential versus the power-law method. In fact, the Weibull best fit depends on only three free parameters, can reproduce well the gradual thinning of tephra deposits, and does not depend on the choice of arbitrary segments or arbitrary extremes of integration.
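A Weibull-type thinning law of the form T(x) = θ(x/λ)^(n-2) exp(-(x/λ)^n), with x the square root of isopach area and three free parameters (θ, λ, n), integrates over area to a closed-form volume; whether this matches the authors' exact parameterisation is an assumption here, and the parameter values are invented. A numerical check of the integration:

```python
import numpy as np

def weibull_thickness(x, theta, lam, n):
    """Weibull-type thinning: thickness as a function of the square root
    of isopach area, T(x) = theta * (x/lam)**(n-2) * exp(-(x/lam)**n)."""
    return theta * (x / lam) ** (n - 2.0) * np.exp(-((x / lam) ** n))

# Volume = integral of thickness over area; with x = sqrt(area), dA = 2x dx,
# and this form integrates analytically to V = 2 * theta * lam**2 / n,
# with no arbitrary segments or extremes of integration.
theta, lam, n = 2.0, 10.0, 1.5              # invented fit parameters
x = np.linspace(1e-6, 200.0, 200001)
f = weibull_thickness(x, theta, lam, n) * 2.0 * x
V_numeric = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
V_analytic = 2.0 * theta * lam ** 2 / n
```

The closed-form volume is what removes the sensitivity to integration extremes that affects the power-law method.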
Abstract:
We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast R 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119–40; Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731–42). The task is to separate the sound fields uj, j = 1, ..., n, of sound sources supported in different bounded domains G1, ..., Gn from measurements of the field on some microphone array, mathematically speaking from knowledge of the sum of the fields u = u1 + ... + un on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions with which each uℓ, for ℓ = 1, ..., n, is constructed from u|Λ. We provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness, solvability of the problem, and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute for Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online.
Abstract:
A new technique for objective classification of boundary layers is applied to ground-based vertically pointing Doppler lidar and sonic anemometer data. The observed boundary layer has been classified into nine different types based on those in the Met Office ‘Lock’ scheme, using vertical velocity variance and skewness, along with attenuated backscatter coefficient and surface sensible heat flux. This new probabilistic method has been applied to three years of data from Chilbolton Observatory in southern England and a climatology of boundary-layer type has been created. A clear diurnal cycle is present in all seasons. The most common boundary-layer type is stable with no cloud (30.0% of the dataset). The most common unstable type is well mixed with no cloud (15.4%). Decoupled stratocumulus is the third most common boundary-layer type (10.3%) and cumulus under stratocumulus occurs 1.0% of the time. The occurrence of stable boundary-layer types is much higher in the winter than the summer and boundary-layer types capped with cumulus cloud are more prevalent in the warm seasons. The most common diurnal evolution of boundary-layer types, occurring on 52 days of our three-year dataset, is that of no cloud with the stability changing from stable to unstable during daylight hours. These results are based on 16393 hours, 62.4% of the three-year dataset, of diagnosed boundary-layer type. This new method is ideally suited to long-term evaluation of boundary-layer type parametrisations in weather forecast and climate models.
Abstract:
Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables. © 2013, Society for Industrial and Applied Mathematics
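The transform idea can be illustrated in one dimension with numpy alone, using the fact that for a squared-distance cost in 1-D the monotone (north-west-corner) coupling of marginals on a sorted support is the optimal transport plan. This is a sketch of the transform concept, not the paper's implementation; particle positions and weights are invented:

```python
import numpy as np

def monotone_transport_plan(a, b):
    """North-west-corner coupling of two probability vectors on a common,
    sorted 1-D support. For a squared-distance cost in one dimension this
    monotone coupling is the optimal transport plan."""
    a, b = a.astype(float).copy(), b.astype(float).copy()
    T = np.zeros((len(a), len(b)))
    i = j = 0
    while i < len(a) and j < len(b):
        m = min(a[i], b[j])           # move as much mass as both allow
        T[i, j] = m
        a[i] -= m
        b[j] -= m
        if a[i] <= 1e-15:
            i += 1
        if b[j] <= 1e-15:
            j += 1
    return T

# Ensemble transform: deterministically move N equally weighted particles
# so that they represent the importance-weighted (posterior) ensemble.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])      # sorted particle positions
w = np.array([0.05, 0.10, 0.20, 0.30, 0.35])   # invented posterior weights
N = len(x)
T = monotone_transport_plan(w, np.full(N, 1.0 / N))  # rows: w, cols: 1/N
x_new = N * T.T @ x                                  # transformed particles
```

Unlike resampling, the transform is deterministic, and by construction the mean of the transformed equal-weight ensemble equals the importance-weighted mean of the original one.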
Abstract:
We present a novel method for retrieving high-resolution, three-dimensional (3-D) nonprecipitating cloud fields in both overcast and broken-cloud situations. The method uses scanning cloud radar and multiwavelength zenith radiances to obtain gridded 3-D liquid water content (LWC) and effective radius (re) and 2-D column mean droplet number concentration (Nd). By using an adaption of the ensemble Kalman filter, radiances are used to constrain the optical properties of the clouds using a forward model that employs full 3-D radiative transfer while also providing full error statistics given the uncertainty in the observations. To evaluate the new method, we first perform retrievals using synthetic measurements from a challenging cumulus cloud field produced by a large-eddy simulation snapshot. Uncertainty due to measurement error in overhead clouds is estimated at 20% in LWC and 6% in re, but the true error can be greater due to uncertainties in the assumed droplet size distribution and radiative transfer. Over the entire domain, LWC and re are retrieved with average errors of 0.05–0.08 g m⁻³ and ~2 μm, respectively, depending on the number of radiance channels used. The method is then evaluated using real data from the Atmospheric Radiation Measurement program Mobile Facility at the Azores. Two case studies are considered, one stratocumulus and one cumulus. Where available, the liquid water path retrieved directly above the observation site was found to be in good agreement with independent values obtained from microwave radiometer measurements, with an error of 20 g m⁻².
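The ensemble Kalman analysis step that such a retrieval adapts can be sketched generically. This is a stochastic, perturbed-observation EnKF on a toy linear problem, not the paper's scheme, which couples the ensemble to a nonlinear 3-D radiative-transfer forward model; all states and covariances are invented:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic (perturbed-observation) ensemble Kalman filter analysis.

    X: (n, N) prior ensemble of states; y: (m,) observation;
    H: (m, n) linear observation operator; R: (m, m) obs-error covariance.
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (N - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return X + K @ (Yp - H @ X)                    # analysis ensemble

rng = np.random.default_rng(3)
n, N = 4, 200
x_true = np.array([1.0, 2.0, 3.0, 4.0])             # invented true state
X = x_true[:, None] + rng.standard_normal((n, N))   # prior ensemble, spread 1
H = np.eye(n)[:2]                                   # observe first two states
R = 0.1 * np.eye(2)
y = H @ x_true + rng.multivariate_normal(np.zeros(2), R)
Xa = enkf_update(X, y, H, R, rng)
```

The analysis ensemble spread contracts where observations constrain the state, which is how the method delivers the "full error statistics" mentioned in the abstract.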