969 results for quantitative analysis


Relevance: 100.00%

Abstract:

The software underpinning today’s IT systems needs to adapt dynamically and predictably to rapid changes in system workload, environment and objectives. We describe a software framework that achieves such adaptiveness for IT systems whose components can be modelled as Markov chains. The framework comprises (i) an autonomic architecture that uses Markov-chain quantitative analysis to dynamically adjust the parameters of an IT system in line with its state, environment and objectives; and (ii) a method for developing instances of this architecture for real-world systems. Two case studies are presented that use the framework successfully for the dynamic power management of disk drives, and for the adaptive management of cluster availability within data centres, respectively.
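The quantitative analysis at the core of such a framework reduces, in the simplest case, to computing a Markov chain's steady-state distribution. A minimal sketch follows, with a hypothetical three-state disk-drive power model; the states, transition probabilities and power draws are illustrative, not taken from the case studies:

```python
import numpy as np

# Hypothetical 3-state disk-drive power model (active, idle, sleep);
# transition probabilities are illustrative, not from the paper.
P = np.array([
    [0.90, 0.08, 0.02],   # active -> active / idle / sleep
    [0.30, 0.60, 0.10],   # idle
    [0.20, 0.05, 0.75],   # sleep
])

# The steady-state distribution pi satisfies pi P = pi with sum(pi) = 1.
# Solve the over-determined system [(P^T - I); 1 1 1] pi = [0; 1].
A = np.vstack([P.T - np.eye(3), np.ones((1, 3))])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Expected power draw given (hypothetical) per-state power in watts.
power = np.array([2.5, 1.0, 0.1])
expected_power = float(pi @ power)
```

An autonomic controller could re-solve this system whenever the monitored transition probabilities change, and adjust the system's parameters to steer the expected power toward its objective.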

Relevance: 100.00%

Abstract:

Objective: To study the density and cross-sectional area of axons in the optic nerve in elderly control subjects and in cases of Alzheimer's disease (AD) using an image analysis system. Methods: Sections of optic nerves from control and AD patients were stained with toluidine blue to reveal axon profiles. Results: The density of axons was reduced in both the central and peripheral portions of the optic nerve in AD compared with control patients. Analysis of axons with different cross-sectional areas suggested a specific loss of the smaller-sized axons in AD, i.e., those with areas less than 1.99 μm². An analysis of axons >11 μm² in cross-sectional area suggested no specific loss of the larger axons in this group of patients. Conclusions: The data suggest that image analysis provides an accurate and reproducible method of quantifying axons in the optic nerve. In addition, the data suggest that axons are lost throughout the optic nerve, with a specific loss of the smaller-sized axons. Loss of the smaller axons may explain the deficits in color vision observed in a significant proportion of patients with AD.

Relevance: 100.00%

Abstract:

We have developed a new technique for extracting histological parameters from multi-spectral images of the ocular fundus. The new method uses a Monte Carlo simulation of the reflectance of the fundus to model how the spectral reflectance of the tissue varies with differing tissue histology. The model is parameterised by the concentrations of the five main absorbers found in the fundus: retinal haemoglobins, choroidal haemoglobins, choroidal melanin, RPE melanin and macular pigment. These parameters are shown to give rise to distinct variations in the tissue colouration. We use the results of the Monte Carlo simulations to construct an inverse model which maps tissue colouration onto the model parameters. This allows the concentration and distribution of the five main absorbers to be determined from suitable multi-spectral images. We propose the use of "image quotients" to allow this information to be extracted from uncalibrated image data. The filters used to acquire the images are selected to ensure a one-to-one mapping between model parameters and image quotients. To recover five model parameters uniquely, images must be acquired in six distinct spectral bands. Theoretical investigations suggest that retinal haemoglobins and macular pigment can be recovered with RMS errors of less than 10%. We present parametric maps showing the variation of these parameters across the posterior pole of the fundus. The results are in agreement with known tissue histology for normal healthy subjects. We also present an early result which suggests that, with further development, the technique could be used to successfully detect retinal haemorrhages.
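The inverse-model idea — mapping tissue colouration onto parameters via quotients of spectral bands — can be illustrated with a toy one-parameter version. The exponential "forward model" below is a Beer-Lambert-style placeholder, not the paper's Monte Carlo tissue model:

```python
import numpy as np

# Toy forward model: maps a single pigment concentration c to the
# reflectance in two spectral bands (Beer-Lambert-style attenuation;
# purely illustrative, not the paper's Monte Carlo tissue model).
def forward(c):
    return np.array([np.exp(-0.5 * c), np.exp(-2.0 * c)])

# Precompute the forward model over the plausible parameter range.
cs = np.linspace(0.0, 3.0, 301)
table = np.array([forward(c) for c in cs])

def invert(measured):
    """Recover c by nearest-neighbour search on the band ratio (an
    "image quotient"), so an unknown overall gain cancels out."""
    q = measured[1] / measured[0]
    qs = table[:, 1] / table[:, 0]
    return cs[np.argmin(np.abs(qs - q))]

true_c = 1.2
gain = 0.7                     # unknown calibration factor
recovered = invert(gain * forward(true_c))
```

Because the quotient cancels the unknown gain, the same inversion works on uncalibrated data — which is the motivation for image quotients in the paper.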

Relevance: 100.00%

Abstract:

A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward to overcome the drawback of high dependency on a priori knowledge. Candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines used as input variables of the regression model are determined adaptively from the training and testing samples. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of a confidence interval of a probabilistic distribution, which is helpful for evaluating the uncertainty in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and modeling robustness than methods based on partial least squares regression, artificial neural networks and the standard support vector machine.
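The RVM's probabilistic output can be illustrated with the closely related (non-sparse) Bayesian linear regression, which also yields a predictive mean together with a confidence interval. The data, prior and noise precisions below are synthetic placeholders, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: analytical-line intensity (x) vs. certified
# concentration (y). Placeholder for real LIBS spectra; the paper trains
# a full RVM, while this sketch uses the closely related (non-sparse)
# Bayesian linear regression with fixed precisions.
x = np.linspace(0.1, 1.0, 23)
y = 5.0 * x + 0.3 + rng.normal(0.0, 0.1, x.size)

Phi = np.column_stack([np.ones_like(x), x])   # design matrix [1, x]
alpha, beta = 1e-3, 100.0                     # prior / noise precision

# Posterior over the weights: N(m, S).
S = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)
m = beta * S @ Phi.T @ y

# Predictive mean and variance at a new intensity: the concentration is
# reported with a probabilistic confidence interval, as in the paper.
phi_new = np.array([1.0, 0.55])
mean = float(phi_new @ m)
var = float(1.0 / beta + phi_new @ S @ phi_new)
lo, hi = mean - 1.96 * var ** 0.5, mean + 1.96 * var ** 0.5
```

The interval width reflects both the measurement noise and the posterior uncertainty in the calibration, which is what makes the prediction useful for judging the reliability of a measured spectrum.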

Relevance: 100.00%

Abstract:

Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance the analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or spectrum standardization over a number of laser shots. The proposed method focuses more on how to enhance the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of measured spectral data. Through the improved segmented weighting function, the information on the spectral data in the normal distribution will be retained in the regression model while the information on the outliers will be restrained or removed. Copper elemental concentration analysis experiments of 16 certified standard brass samples were carried out. The average value of relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness compared with the quantitative analysis methods based on Partial Least Squares (PLS) regression, standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed, compared with the four known weighting functions.
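The key ingredient is the segmented weighting of regression residuals. The paper's exact function is not reproduced here; the sketch below uses a generic three-segment scheme (full weight, linear down-weighting, removal) inside iteratively reweighted least squares on synthetic data with two outliers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic calibration data with two gross outliers, mimicking
# shot-to-shot LIBS fluctuation. The true relation is y = 2x + 1.
x = np.linspace(0.0, 1.0, 16)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.02, x.size)
y[3] += 1.0      # outlier
y[11] -= 1.2     # outlier

def segmented_weights(r):
    """Three-segment weighting of residuals: full weight when small,
    linear down-weighting when moderate, removal when far out.
    (Generic scheme; not the paper's exact function.)"""
    s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
    z = np.abs(r) / s
    w = np.ones_like(z)
    mid = (z > 2.0) & (z <= 3.0)
    w[mid] = 3.0 - z[mid]       # ramps from 1 down to 0
    w[z > 3.0] = 0.0
    return w

# Iteratively reweighted least squares with the segmented weights.
A = np.column_stack([np.ones_like(x), x])
w = np.ones_like(x)
for _ in range(10):
    W = np.diag(w)
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    w = segmented_weights(y - A @ coef)

intercept, slope = coef
```

Points in the normal residual range keep their full influence while the far outliers end with zero weight, so the fitted line recovers the underlying calibration despite the contaminated shots.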

Relevance: 100.00%

Abstract:

Cloud computing is a new technological paradigm offering computing infrastructure, software and platforms as a pay-as-you-go, subscription-based service. Many potential customers of cloud services require essential cost assessments to be undertaken before transitioning to the cloud. Current assessment techniques are imprecise as they rely on simplified specifications of resource requirements that fail to account for probabilistic variations in usage. In this paper, we address these problems and propose a new probabilistic pattern modelling (PPM) approach to cloud costing and resource usage verification. Our approach is based on a concise expression of probabilistic resource usage patterns translated to Markov decision processes (MDPs). Key costing and usage queries are identified and expressed in a probabilistic variant of temporal logic and calculated to a high degree of precision using quantitative verification techniques. The PPM cost assessment approach has been implemented as a Java library and validated with a case study and scalability experiments. © 2012 Springer-Verlag Berlin Heidelberg.
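For a finite-horizon expected-cost query, quantitative verification over an MDP reduces to backward induction. A toy sketch with two usage states and two hypothetical pricing actions follows; the numbers are illustrative, and real PPM queries would be evaluated with a probabilistic model checker rather than this hand-rolled recursion:

```python
import numpy as np

# Toy resource-usage MDP: two usage states (low, high) and two pricing
# actions per step. All numbers are illustrative placeholders.
# P[a] is the transition matrix under action a; c[a] the per-step cost.
P = np.array([[[0.8, 0.2],
               [0.4, 0.6]],     # action 0: on-demand
              [[0.8, 0.2],
               [0.4, 0.6]]])    # action 1: reserved
c = np.array([[1.0, 6.0],       # on-demand: cheap at low usage
              [3.0, 3.0]])      # reserved: flat rate

# Minimum expected total cost over a finite horizon (backward induction).
horizon = 12
V = np.zeros(2)
for _ in range(horizon):
    Q = c + P @ V               # Q[a, s]: expected cost of a in state s
    V = Q.min(axis=0)           # pick the cheaper action in each state

min_expected_cost = float(V[0])  # starting from the low-usage state
```

The same recursion underlies expected-reward queries in probabilistic temporal logic; a model checker simply evaluates them to a specified precision over the MDP compiled from the usage patterns.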

Relevance: 100.00%

Abstract:

Quantitative analysis of solid-state processes from isothermal microcalorimetric data is straightforward if data for the total process have been recorded and problematic (in the more likely case) when they have not. Data are usually plotted as a function of fraction reacted (α); for calorimetric data, this requires knowledge of the total heat change (Q) upon completion of the process. Determination of Q is difficult in cases where the process is fast (initial data missing) or slow (final data missing). Here we introduce several mathematical methods that allow the direct calculation of Q by selection of data points when only partial data are present, based on analysis with the Pérez-Maqueda model. All methods in addition allow direct determination of the reaction mechanism descriptors m and n and from this the rate constant, k. The validity of the methods is tested with the use of simulated calorimetric data, and we introduce a graphical method for generating solid-state power-time data. The methods are then applied to the crystallization of indomethacin from a glass. All methods correctly recovered the total reaction enthalpy (16.6 J) and suggested that the crystallization followed an Avrami model. The rate constants for crystallization were determined to be 3.98 × 10⁻⁶, 4.13 × 10⁻⁶, and 3.98 × 10⁻⁶ s⁻¹ with methods 1, 2, and 3, respectively. © 2010 American Chemical Society.
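The recovery of Q from partial data can be illustrated numerically. The sketch below generates noise-free Avrami power-time data (using the abstract's Q = 16.6 J and a rate constant near 4 × 10⁻⁶ s⁻¹; n = 2 is an assumed exponent), discards the initial and final portions of the trace, and recovers Q by a coarse grid search rather than the paper's direct-calculation methods:

```python
import numpy as np

# Avrami kinetics: alpha(t) = 1 - exp(-(k t)^n), so calorimetric power
# is p(t) = Q * d(alpha)/dt. Q = 16.6 J and k ~ 4e-6 1/s follow the
# abstract; n = 2 is an assumed Avrami exponent.
Q_true, k_true, n = 16.6, 4.0e-6, 2.0

def power(t, Q, k):
    kt_n = (k * t) ** n
    return Q * n * k ** n * t ** (n - 1.0) * np.exp(-kt_n)

t = np.linspace(1.0, 8.0e5, 400)          # seconds
p = power(t, Q_true, k_true)              # noise-free power-time data

# Keep only partial data (initial and final portions missing), then
# recover Q and k by a coarse grid search -- a generic stand-in for the
# paper's direct-calculation methods.
mask = (t > 1.0e5) & (t < 6.0e5)
tp, pp = t[mask], p[mask]

best = (np.inf, None, None)
for Q in np.linspace(10.0, 25.0, 151):          # 0.1 J steps
    for k in np.linspace(2.0e-6, 8.0e-6, 121):  # 0.05e-6 1/s steps
        sse = float(np.sum((power(tp, Q, k) - pp) ** 2))
        if sse < best[0]:
            best = (sse, Q, k)

_, Q_rec, k_rec = best
```

With noise-free data the search lands on the true parameters exactly; with real calorimetric noise the same fit yields Q and k with an uncertainty set by the residual variance.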

Relevance: 100.00%

Abstract:

The total time a customer spends in the business process system, called the customer cycle-time, is a major contributor to overall customer satisfaction. Business process analysts and designers are frequently asked to design process solutions with optimal performance. Simulation models have been very popular for quantitatively evaluating business processes; however, simulation is time-consuming and requires extensive modeling experience. Moreover, simulation models neither provide recommendations nor yield optimal solutions for business process design. A queueing network model is a good analytical approach to business process analysis and design, and can provide a useful abstraction of a business process. However, existing queueing network models were developed for telephone systems or applied to manufacturing processes in which machine servers dominate the system. In a business process, the servers are usually people, and the characteristics of human servers, namely specialization and coordination, should be taken into account by the queueing model.

The research described in this dissertation develops an open queueing network model for quick analysis of business processes. Additionally, optimization models are developed to provide optimal business process designs. The queueing network model extends and improves upon existing multi-class open queueing network models (MOQN) so that customer flow in human-server-oriented processes can be modeled. The optimization models help business process designers find the optimal design of a business process with consideration of specialization and coordination.

The main findings of the research are, first, that parallelization can reduce the cycle-time for customer classes that require more than one parallel activity; however, under high server utilization the coordination time introduced by parallelization overwhelms the savings, since the waiting time increases significantly and thus the cycle-time increases. Second, the level of industrial technology employed by a company and the coordination time needed to manage tasks have the strongest impact on business process design: when the level of industrial technology is high, more division is required to improve the cycle-time; when the required coordination time is high, consolidation is required to improve the cycle-time.
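A single station already shows the cycle-time trade-offs involved. Below is a sketch of the mean cycle time at an M/M/c station via the Erlang C formula; the arrival and service rates are hypothetical, and the dissertation's multi-class open queueing network is considerably richer than this single-node view:

```python
import math

def erlang_c(c, a):
    """Probability that an arriving customer must wait in an M/M/c
    queue, where a = lambda/mu is the offered load (requires a < c)."""
    s = sum(a ** k / math.factorial(k) for k in range(c))
    last = a ** c / (math.factorial(c) * (1.0 - a / c))
    return last / (s + last)

def mmc_cycle_time(lam, mu, c):
    """Mean time in system (queueing wait + service) at an M/M/c station."""
    a = lam / mu
    wq = erlang_c(c, a) / (c * mu - lam)
    return wq + 1.0 / mu

# Hypothetical process step: 10 customers/hour, each server completes
# 4 jobs/hour. Adding a fourth server roughly halves the cycle time.
t3 = mmc_cycle_time(10.0, 4.0, 3)   # ~0.60 h
t4 = mmc_cycle_time(10.0, 4.0, 4)   # ~0.30 h
```

At high utilization the waiting-time term dominates the cycle time, which is exactly why the coordination overhead of parallelization can wipe out its savings in the findings above.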

Relevance: 100.00%

Abstract:

Glass is a common form of trace evidence found at many crime scenes in the form of small fragments. These glass fragments can transfer to surrounding objects and/or persons and may provide forensic investigators with valuable information to link a suspect to the scene of a crime. Since the elemental compositions of different glass sources can be very similar, a highly discriminating technique is required to distinguish between fragments that have originated from different sources. The research presented here demonstrates that Laser Induced Breakdown Spectroscopy (LIBS) is a viable analytical technique for the association and discrimination of glass fragments. The first part of this research describes the optimization of the LIBS experiments, including the use of different laser wavelengths to investigate laser-material interaction. A 266 nm excitation laser provided the best analytical figures of merit with minimal damage to the sample, and the resulting figures of merit are presented. The second part of this research evaluated the sensitivity of LIBS for associating or discriminating float glass samples originating from the same manufacturing plants and produced at approximately the same time. Two sample sets were analyzed, with manufacturing dates ranging from days to years apart. Eighteen (18) atomic emission lines corresponding to the elements Sr, K, Fe, Ca, Al, Ba, Na, Mg and Ti were chosen because they were detected above the method detection limits and presented differences between the samples. The ten elemental ratios producing the most discrimination were selected for each set. When all the ratios were combined in a comparison, 99% of the possible pairs were discriminated using the optimized LIBS method, with typical analytical precisions of ∼5% RSD.

The final study consisted of the development of a new approach to the use of LIBS for quantitative analysis of ultra-low-volume solutions using aerosols and microdrops. Laser induced breakdown spectroscopy was demonstrated to be an effective technique for the analysis of volumes as low as 90 pL for microdrop LIBS, with a 1 pg absolute LOD, and 20 µL for aerosol LIBS, with an absolute LOD of ∼100 fg.
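The ratio-based comparison can be sketched as follows, with hypothetical replicate intensities at ∼5% RSD; the ±3-standard-deviation match criterion used here is one common interval-overlap rule, and the dissertation's exact comparison criterion may differ:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical replicate LIBS intensities (arbitrary units) for two glass
# fragments at two emission lines; ~5% RSD as in the abstract. All values
# are illustrative, not measured data.
def replicates(mean, rsd=0.05, n=10):
    return rng.normal(mean, rsd * mean, n)

sample_a = {"Sr": replicates(120.0), "Ca": replicates(800.0)}
sample_b = {"Sr": replicates(240.0), "Ca": replicates(800.0)}

def ratio_stats(sample, num, den):
    r = sample[num] / sample[den]
    return r.mean(), r.std(ddof=1)

def same_source(s1, s2, num, den, k=3.0):
    """Match criterion: mean elemental ratios agree within k standard
    deviations (a common interval-overlap rule; the dissertation's
    exact criterion may differ)."""
    m1, sd1 = ratio_stats(s1, num, den)
    m2, sd2 = ratio_stats(s2, num, den)
    return abs(m1 - m2) <= k * max(sd1, sd2)

discriminated = not same_source(sample_a, sample_b, "Sr", "Ca")
```

In practice each of the ten selected ratios is compared this way, and a pair of fragments is discriminated if any ratio fails the match criterion.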

Relevance: 100.00%

Abstract:

Mainstream electrical stimulation therapies, e.g., spinal cord stimulation (SCS) and deep brain stimulation, use pulse trains delivered at rates no higher than 200 Hz. In recent years, stimulation of nerve fibers using kilohertz-frequency (KHF) signals has received increased attention due to its potential to penetrate deeper into the tissue and its ability to block conduction of action potentials. In addition, a growing number of clinical applications use KHF waveforms, including transcutaneous electrical stimulation (TES) for overactive bladder and SCS for chronic pain. However, there is a lack of fundamental understanding of the mechanisms of action of KHF stimulation. The goal of this research was to quantitatively analyze KHF neurostimulation.

We implemented a multilayer volume conductor model of TES including dispersion and capacitive effects, and we validated the model with in vitro measurements in a phantom constructed from dispersive materials. We quantified the effects of frequency on the distribution of potentials and on fiber excitation. We also quantified the effects of a novel transdermal amplitude-modulated signal (TAMS) consisting of a non-zero-offset sinusoidal carrier modulated by a square-pulse train. The model revealed that high-frequency signals generated larger potentials at depth than did low frequencies, but this did not translate into lower stimulation thresholds. Both TAMS and conventional rectangular pulses activated more superficial fibers in addition to the deeper, target fibers, and at no frequency did we observe an inversion of the strength-distance relationship. In addition, we performed in vivo experiments, applying direct stimulation to the sciatic nerve of cats and rats. We measured electromyogram and compound action potential activity evoked by pulses, by TAMS, and by modified versions of TAMS in which we varied the amplitude of the carrier. Nerve fiber activation using TAMS showed no difference from activation with conventional pulses for carrier frequencies of 20 kHz and higher, regardless of the carrier amplitude. Therefore, TAMS with carrier frequencies >20 kHz offers no advantage over conventional pulses, even with larger carrier amplitudes, and this has implications for the design of waveforms for efficient and effective TES.

We developed a double cable model of a dorsal column (DC) fiber to quantify the responses of DC fibers to a novel KHF-SCS signal. We validated the model using in vivo recordings of the strength-duration relationship and the recovery cycle of single DC fibers. We coupled the fiber model to a model of SCS in humans and applied the KHF-SCS signal to quantify thresholds for activation and conduction block for different fiber diameters at different locations in the DCs. Activation and block thresholds increased sharply as the fibers were placed deeper in the DCs, and decreased for larger fiber diameters. Activation thresholds were >5 mA in all cases and up to five times higher than for conventional (~50 Hz) SCS. For fibers exhibiting persistent activation, the degree of synchronization of the firing activity to the KHF-SCS signal, as quantified using the vector strength, was low over a broad amplitude range, and the dissimilarity between the activities of pairs of fibers, as quantified using the spike-time distance, was high and decreased for more closely positioned fibers. Conduction block thresholds were higher than 30 mA for all fiber diameters at any depth, well above the amplitudes used clinically (0.5–5 mA). KHF-SCS appears to activate few, large, superficial fibers, and the activated fibers fire asynchronously with respect to the stimulation signal and to other activated fibers.
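The vector strength used to quantify synchronization has a compact definition: each spike time is mapped to a phase of the stimulus period and the mean resultant length is computed. A minimal sketch with synthetic spike trains follows; the 10 kHz stimulus frequency is illustrative:

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Map each spike time to a phase of the stimulus period and return
    the mean resultant length: 1 = perfect phase locking, ~0 = firing
    asynchronous with respect to the stimulus."""
    phases = 2.0 * np.pi * freq * np.asarray(spike_times)
    return float(np.hypot(np.cos(phases).sum(), np.sin(phases).sum())
                 / len(phases))

f = 10_000.0                        # illustrative 10 kHz KHF stimulus
locked = np.arange(100) / f         # one spike per cycle, fixed phase
rng = np.random.default_rng(4)
random_times = np.sort(rng.uniform(0.0, 0.01, 100))  # asynchronous firing

vs_locked = vector_strength(locked, f)        # ~1.0
vs_random = vector_strength(random_times, f)  # near 0
```

A low vector strength over a broad amplitude range, as reported above, means the persistent firing is not phase-locked to the KHF-SCS signal.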

The outcomes of this work contribute to the understanding of KHF neurostimulation by establishing the importance of the tissue filtering properties on the distribution of potentials, assessing quantitatively the impact of KHF stimulation on nerve fiber excitation, and developing and validating a detailed model of a DC fiber to characterize the effects of KHF stimulation on DC axons. The results have implications for design of waveforms for efficient and effective nerve fiber stimulation in the peripheral and central nervous system.

Relevance: 100.00%

Abstract:

The presence of harmful algal blooms (HABs) is a growing concern in aquatic environments. Among HAB organisms, cyanobacteria are of special concern because they have been reported worldwide to cause environmental and human health problems through contamination of drinking water. Although several analytical approaches have been applied to monitoring cyanobacterial toxins, conventional methods are costly and time-consuming, with analyses taking weeks for field sampling and subsequent lab analysis. Capillary electrophoresis (CE) is a particularly suitable analytical separation method because it couples very small samples and rapid separations to a wide range of selective and sensitive detection techniques. This paper demonstrates a method for the rapid separation and identification of four microcystin variants commonly found in aquatic environments. CE procedures coupled to UV and electrospray ionization time-of-flight mass spectrometry (ESI-TOF) detection were developed. All four analytes were separated within 6 minutes. The ESI-TOF experiments provide accurate molecular mass information that further identifies the analytes.

Relevance: 100.00%

Abstract:

Aim: Excipients are used to overcome the chemical, physical and microbiological challenges posed by developing formulated medicines. Both methyl and propyl paraben are commonly used in pediatric liquid formulations, but there are no data on systemic exposure to parabens in neonates. The European Study of Neonatal Exposure to Excipients project has investigated this. Results & Methodology: Dried blood spot (DBS) sampling was used to collect opportunistic blood samples. Parabens were extracted from the DBS samples and analyzed using a validated LC-MS/MS assay.

Discussion & Conclusion: The above assay was applied to analyze neonatal DBS samples. The blood concentrations of parabens in neonates confirm systemic exposure to parabens following administration of routine medicines.

Relevance: 100.00%

Abstract:

Otto-von-Guericke-Universität Magdeburg, Faculty of Computer Science, dissertation, 2016