987 results for Average method


Relevance:

30.00%

Abstract:

As congestion management strategies begin to put more emphasis on person trips than on vehicle trips, the need for vehicle occupancy data has become more critical. The traditional methods of collecting these data include the roadside windshield method and the carousel method; both are labor-intensive and expensive. An alternative is to use the vehicle occupancy information in traffic accident records. This method is cost-effective and may provide better spatial and temporal coverage than the traditional methods. However, it is subject to potential biases resulting from under- and over-involvement of certain population sectors and certain types of accidents in traffic accident records. In this dissertation, three such potential biases, i.e., accident severity, driver's age, and driver's gender, were investigated and the corresponding bias factors were developed as needed. The results show that although multi-occupant vehicles are involved in higher percentages of severe accidents than are single-occupant vehicles, multi-occupant vehicles were not overrepresented in the accident database relative to the whole accident-vehicle population. On the other hand, a significant difference was found between the age and gender distributions of drivers involved in accidents and those of the general driving population. An information system that incorporates adjustments for these potential biases was developed to estimate average vehicle occupancies (AVOs) for different types of roadways on the Florida state roadway system. A reasonableness check of the results from the system shows AVO estimates that are highly consistent with expectations. In addition, comparisons of AVOs from accident data with field estimates show that the two data sources produce relatively consistent results.
While accident records can be used to obtain historical AVO trends and field data can be used to estimate current AVOs, no known methods have been developed to project future AVOs. Four regression models were therefore developed as part of this dissertation to predict weekday AVOs at different levels of geographic area and for different roadway types. The models show that socioeconomic factors such as income, vehicle ownership, and employment have a significant impact on AVOs.
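
The abstract does not report the fitted model forms or coefficients; the sketch below is only a hedged illustration of fitting a weekday-AVO linear regression by ordinary least squares, using the three socioeconomic predictors named above. Every number in it is invented.

```python
import numpy as np

# Hypothetical illustration: fit AVO ~ income + vehicle ownership + employment
# by ordinary least squares on synthetic data. None of these values come from
# the dissertation.
rng = np.random.default_rng(0)
n = 200
income = rng.uniform(20, 80, n)        # household income, $1000s (invented)
vehicles = rng.uniform(0.5, 2.5, n)    # vehicles per household (invented)
employment = rng.uniform(0.4, 0.8, n)  # employment rate (invented)

# Synthetic ground truth: AVO tends to fall as income and ownership rise
avo = (1.8 - 0.004 * income - 0.10 * vehicles - 0.3 * employment
       + rng.normal(0.0, 0.02, n))

X = np.column_stack([np.ones(n), income, vehicles, employment])
beta, *_ = np.linalg.lstsq(X, avo, rcond=None)
print("intercept and coefficients:", beta)
```

With enough data the least-squares estimates recover the synthetic coefficients, which is the basic mechanism such regression models rely on.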

Relevance:

30.00%

Abstract:

Benzodiazepines are among the most prescribed anti-anxiety compounds and are present in many toxicological screens. These drugs are also prominent in the commission of drug-facilitated sexual assaults due to their effects on the central nervous system. Because of their potency, a low dose of these compounds is often administered to victims; therefore, the target detection limit for these compounds in biological samples is 10 ng/mL. Currently these compounds are predominantly analyzed using immunoassay techniques; however, more specific screening methods are needed. The goal of this dissertation was to develop a rapid, specific screening technique for benzodiazepines in urine samples utilizing surface-enhanced Raman spectroscopy (SERS), which has previously been shown to be capable of detecting trace quantities of pharmaceutical compounds in aqueous solutions. SERS has the advantage of overcoming the low sensitivity and fluorescence effects seen with conventional Raman spectroscopy. The spectra are obtained by applying an analyte onto a SERS-active metal substrate such as colloidal metal particles. SERS signals can be further increased with the addition of aggregating solutions; these agents cause the nanoparticles to amass and form hot-spots that increase the signal intensity. In this work, the colloidal particles were spherical gold nanoparticles in aqueous solution with an average size of approximately 30 nm. The optimum aggregating agent for the detection of benzodiazepines was determined to be 16.7 mM MgCl2, providing the highest signal intensities at the lowest drug concentrations, with limits of detection between 0.5 and 127 ng/mL. A supported liquid extraction technique was utilized for rapid, clean extraction of benzodiazepines from urine at pH 5.0, with limits of detection between 6 and 640 ng/mL.
It was shown that at this pH, other drugs prevalent in urine samples can be removed, providing selective detection of the benzodiazepine of interest. This technique has been shown to provide rapid (less than twenty minutes), sensitive, and specific detection of benzodiazepines at low concentrations in urine. It provides the forensic community with a sensitive and specific screening technique for the detection of benzodiazepines in drug-facilitated assault cases.


Relevance:

30.00%

Abstract:

A modified UNIFAC–VISCO group contribution method was developed for the correlation and prediction of the viscosity of ionic liquids as a function of temperature at 0.1 MPa. In this original approach, cations and anions were regarded as distinct molecular groups. The significance of this approach comes from the ability to calculate the viscosity of mixtures of ionic liquids as well as of pure ionic liquids. Binary interaction parameters for selected cations and anions were determined by fitting the experimental viscosity data available in the literature for selected ionic liquids. The temperature dependence of the cation and anion viscosity contributions was fitted to Vogel–Fulcher–Tammann (VFT) behavior. The binary interaction parameters and VFT-type fitting parameters were then used to determine the viscosity of pure ionic liquids and of mixtures with different combinations of cations and anions, to validate the prediction method. The viscosities of binary ionic liquid mixtures were then calculated using this prediction method. In this work, the viscosity data of pure ionic liquids and of binary mixtures of ionic liquids are successfully calculated from 293.15 K to 363.15 K at 0.1 MPa. All calculated viscosity data showed excellent agreement with experimental data, with a relative absolute average deviation lower than 1.7%.
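
The VFT temperature dependence mentioned above has a standard closed form; the sketch below evaluates it with invented parameters (the paper's fitted ion parameters are not reproduced here).

```python
import math

# Standard Vogel-Fulcher-Tammann (VFT) form for viscosity:
#   eta(T) = exp(A + B / (T - T0))
# A, B, T0 below are invented placeholders, not the paper's fitted values.
def vft_viscosity(T, A, B, T0):
    return math.exp(A + B / (T - T0))

A, B, T0 = -1.5, 800.0, 165.0          # hypothetical parameters
etas = {T: vft_viscosity(T, A, B, T0) for T in (293.15, 313.15, 363.15)}
for T in sorted(etas):
    print(f"{T:.2f} K -> {etas[T]:.1f} (arbitrary viscosity units)")
```

As expected for an ionic liquid, the evaluated viscosity drops steeply as temperature rises across the paper's 293.15–363.15 K range.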

Relevance:

30.00%

Abstract:

The viscosity of ionic liquids (ILs) has been modeled as a function of temperature at atmospheric pressure using a new method based on the UNIFAC–VISCO approach. This model extends the calculations previously reported by our group (see Zhao et al. J. Chem. Eng. Data 2016, 61, 2160–2169), which used 154 experimental viscosity data points of 25 ionic liquids for regression of a set of binary interaction parameters and ion Vogel–Fulcher–Tammann (VFT) parameters. Discrepancies in the experimental data for the same IL affect the quality of the correlation and thus the development of the predictive method. In this work, mathematical gnostics was used to analyze the experimental data from different sources and recommend one set of reliable data for each IL. These recommended data (819 data points in total) for 70 ILs were correlated using this model to obtain an extended set of binary interaction parameters and ion VFT parameters, with a regression accuracy of 1.4%. In addition, 966 experimental viscosity data points for 11 binary mixtures of ILs were collected from the literature to evaluate this model. The binary data consist of 128 training data points, used for the optimization of binary interaction parameters, and 838 test data points, used for comparison with purely predicted values. The relative average absolute deviation (RAAD) for training and test is 2.9% and 3.9%, respectively.
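
The RAAD quoted above has a simple definition; the sketch below computes it for invented data, assuming the common form mean(|calculated − experimental| / experimental).

```python
# Relative average absolute deviation (RAAD), as a percentage.
# The exact definition is assumed; the data points are invented.
def raad(experimental, calculated):
    terms = [abs(c - e) / e for e, c in zip(experimental, calculated)]
    return 100.0 * sum(terms) / len(terms)

exp_visc = [34.0, 52.0, 88.0]    # hypothetical measured viscosities, mPa·s
calc_visc = [33.1, 53.5, 86.0]   # hypothetical model predictions
print(f"RAAD = {raad(exp_visc, calc_visc):.2f}%")
```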

Relevance:

30.00%

Abstract:

A new method for evaluating the efficiency of parabolic trough collectors, called the Rapid Test Method, is investigated at the Solar-Institut Jülich. The basic concept is to carry out measurements under stagnation conditions, which makes the process fast and inexpensive because no working fluid is required. With this approach, the temperature reached by the inner wall of the receiver is assumed to be the stagnation temperature and hence the average temperature inside the collector. This assumption introduces a systematic error, which can be rectified through a correction factor. A model of the collector was simulated with COMSOL Multiphysics to study the size of the correction factor as a function of collector geometry and working conditions. The resulting values were compared with experimental data obtained at a test rig at the Solar-Institut Jülich. These results do not match the simulated ones; consequently, it was not possible to verify the model. The reliability of both the COMSOL Multiphysics model and the measurements is analysed. The influence of the correction factor on the Rapid Test Method is also studied, as well as the possibility of neglecting it by measuring the receiver's inner-wall temperature where it receives the least amount of solar radiation. The last two chapters analyse the specific heat capacity as a function of pressure and temperature and present some considerations about the uncertainties in the efficiency curve obtained with the Rapid Test Method.

Relevance:

30.00%

Abstract:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores that run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system; hence, processor power optimization is crucial for satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. Having a fast and accurate method for processor power estimation at design time helps the designer explore a large space of design possibilities and make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific piece of software is key to choosing appropriate algorithms and writing power-efficient code. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code that repeat during execution, then building the power model based on the average number of repetitions.
In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup compared with conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
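
The average-case energy model described above keys on the comparison count of insertion sort. The sketch below counts comparisons directly and checks them against the well-known quadratic average-case growth; the energy coefficients are invented placeholders, not the measured LEON3 parameters.

```python
import random

# Count key comparisons made by insertion sort (illustration of the
# average-case quantity; MOQA itself derives such counts analytically).
def insertion_sort_comparisons(a):
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comparisons += 1           # one key comparison
            if a[j] > key:
                a[j + 1] = a[j]        # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

random.seed(1)
n, trials = 32, 2000
avg = sum(insertion_sort_comparisons(random.sample(range(n), n))[1]
          for _ in range(trials)) / trials

# Hypothetical linear energy model: static cost plus a cost per comparison
energy = 0.8 + 2.5e-3 * avg            # both coefficients invented
print(f"average comparisons for n={n}: {avg:.1f} (~n^2/4 plus lower-order terms)")
```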

Relevance:

20.00%

Abstract:

The present paper describes a novel, simple, and reliable differential pulse voltammetric method for determining amitriptyline (AMT) in pharmaceutical formulations. It has been reported by many authors that this antidepressant is electrochemically inactive at carbon electrodes. However, the procedure proposed herein consists of electrochemically oxidizing AMT at an unmodified carbon nanotube paste electrode in the presence of 0.1 mol L⁻¹ sulfuric acid used as electrolyte. At this concentration, the acid facilitated AMT electro-oxidation through a one-electron transfer at 1.33 V vs. Ag/AgCl, as observed by the increase in peak current. Under optimized conditions (modulation time 5 ms, scan rate 90 mV s⁻¹, and pulse amplitude 120 mV), a linear calibration curve was constructed in the range of 0.0–30.0 μmol L⁻¹, with a correlation coefficient of 0.9991 and a limit of detection of 1.61 μmol L⁻¹. The procedure was successfully validated for intra- and inter-day precision and accuracy. Moreover, its feasibility was assessed through the analysis of commercial pharmaceutical formulations, and the results were compared with those of the UV-vis spectrophotometric method recommended as the standard analytical technique by the Brazilian Pharmacopoeia.
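
The calibration figures above come from real measurements; as a hedged sketch, the snippet below shows the standard way such numbers are derived: a least-squares line through concentration/current pairs and the common 3.3·s/slope detection-limit estimate. The data points here are invented.

```python
# Invented calibration data (concentration in umol/L, peak current in uA)
conc = [0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
current = [0.02, 1.05, 2.01, 3.10, 3.98, 5.03, 6.01]

# Least-squares slope and intercept
n = len(conc)
mx, my = sum(conc) / n, sum(current) / n
slope = sum((x - mx) * (y - my) for x, y in zip(conc, current)) \
        / sum((x - mx) ** 2 for x in conc)
intercept = my - slope * mx

# Residual standard deviation and LOD = 3.3 * s / slope
residuals = [y - (slope * x + intercept) for x, y in zip(conc, current)]
s_res = (sum(r * r for r in residuals) / (n - 2)) ** 0.5
lod = 3.3 * s_res / slope
print(f"slope = {slope:.4f} uA L/umol, LOD = {lod:.2f} umol/L")
```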

Relevance:

20.00%

Abstract:

Substantial complexity has been introduced into treatment regimens for patients with human immunodeficiency virus (HIV) infection. Many drug-related problems (DRPs) are detected in these patients, such as low adherence, therapeutic inefficacy, and safety issues. We evaluated the impact of pharmacist interventions on CD4+ T-lymphocyte count, HIV viral load, and DRPs in patients with HIV infection. In this 18-month prospective controlled study, 90 outpatients were selected by convenience sampling from the Hospital Dia–University of Campinas Teaching Hospital (Brazil). Forty-five patients comprised the pharmacist intervention group and 45 the control group; all patients had HIV infection with or without acquired immunodeficiency syndrome. Pharmaceutical appointments were conducted based on the Pharmacotherapy Workup method, although the DRP and pharmacist intervention classifications were modified for applicability to institutional service limitations and research requirements. Pharmacist interventions were performed immediately after detection of DRPs. The main outcome measures were DRPs, CD4+ T-lymphocyte count, and HIV viral load. After pharmacist intervention, DRPs decreased from 5.2 (95% CI = 4.1–6.2) to 4.2 (95% CI = 3.3–5.1) per patient (P = 0.043). A total of 122 pharmacist interventions were proposed, with an average of 2.7 interventions per patient. All the pharmacist interventions were accepted by physicians, and among patients the interventions were well accepted during the appointments, but compliance with the interventions was not measured. A statistically significant increase in CD4+ T-lymphocyte count was found in the intervention group (260.7 cells/mm³ [95% CI = 175.8–345.6] to 312.0 cells/mm³ [95% CI = 23.5–40.6], P = 0.015), which was not observed in the control group. There was no statistical difference between the groups regarding HIV viral load.
This study suggests that pharmacist interventions in patients with HIV infection can cause an increase in CD4+ T-lymphocyte counts and a decrease in DRPs, demonstrating the importance of an optimal pharmaceutical care plan.

Relevance:

20.00%

Abstract:

The present work compared local injection of mononuclear cells into the spinal cord lateral funiculus with the alternative approach of local delivery with fibrin sealant after ventral root avulsion (VRA) and reimplantation. Female adult Lewis rats were divided into the following groups: avulsion only; reimplantation with fibrin sealant; root repair with fibrin sealant associated with mononuclear cells; and repair with fibrin sealant plus injected mononuclear cells. Cell therapy resulted in greater survival of spinal motoneurons up to four weeks post-surgery, especially when mononuclear cells were added to the fibrin glue. Injection of mononuclear cells into the lateral funiculus yielded results similar to reimplantation alone. Additionally, mononuclear cells added to the fibrin glue increased neurotrophic factor gene transcript levels in the spinal cord ventral horn. Regarding motor recovery, evaluated by the peroneal functional index as well as paw print pressure, cell-treated rats performed as well as reimplanted-only animals and significantly better than avulsion-only subjects. The results herein demonstrate that mononuclear cell therapy is neuroprotective, increasing levels of brain-derived neurotrophic factor (BDNF) and glial-derived neurotrophic factor (GDNF). Moreover, the fibrin sealant mononuclear cell delivery approach gave the best and longest-lasting results.

Relevance:

20.00%

Abstract:

Flavanones (hesperidin, naringenin, naringin, and poncirin) in industrial, hand-squeezed, and fresh-squeezed (in-machine) orange juices were determined by HPLC/DAD analysis using a previously described liquid–liquid extraction method. Method validation, including accuracy, was performed using recovery tests. Thirty-six samples collected from different Brazilian locations and brands were analyzed, and concentrations were determined using an external standard curve. The limits of detection (LOD) were 0.0037, 1.87, 0.0147, and 0.0066 mg 100 g⁻¹, and the limits of quantification (LOQ) were 0.0089, 7.84, 0.0302, and 0.0200 mg 100 g⁻¹ for naringin, hesperidin, poncirin, and naringenin, respectively. The results demonstrated that hesperidin was present at the highest concentration levels, especially in the industrial orange juices; its average content was 69.85 mg 100 g⁻¹, with a range of 18.80–139.00 mg 100 g⁻¹. The other flavanones showed lower concentration levels: average contents (and ranges) were 0.019 (0.01–0.30), 0.12 (0.1–0.17), and 0.13 (0.01–0.36) mg 100 g⁻¹, respectively. The results were also evaluated using principal component analysis (PCA), which showed that poncirin, naringenin, and naringin were the principal elements contributing to the variability in the sample concentrations.
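
As a hedged sketch of the PCA step, the snippet below runs a minimal SVD-based PCA on an invented 36×4 concentration matrix (one column per flavanone). It also illustrates why variables are usually autoscaled first: on raw concentrations, hesperidin's much larger variance dominates the first component, whereas the abstract found the minor flavanones driving the variability.

```python
import numpy as np

# Invented concentration table: 36 samples x 4 flavanones
# (naringin, hesperidin, poncirin, naringenin), loosely echoing the
# scale difference reported in the abstract.
rng = np.random.default_rng(42)
X = rng.normal(size=(36, 4)) * np.array([0.05, 30.0, 0.04, 0.08])

Xc = X - X.mean(axis=0)                 # mean-center each variable
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)         # variance fraction per component
scores = Xc @ Vt.T                      # sample coordinates in PC space
print("explained variance ratios:", np.round(explained, 4))
```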

Relevance:

20.00%

Abstract:

Bariatric surgery is considered an effective method for sustained weight loss, but it may cause various nutritional complications. The aim of this study was to evaluate the nutritional status of minerals and vitamins, food consumption, and physiologic parameters in patients with obesity before and 6 months after Roux-en-Y gastric bypass surgery (RYGB). Thirty-six patients who had undergone RYGB were prospectively evaluated before and 6 months after surgery. At each phase, their weight, height, body mass index (BMI), Electro Sensor Complex (ES Complex) data, and food consumption were assessed, along with serum levels of total protein, albumin, prealbumin, parathyroid hormone (PTH), zinc (Zn), vitamin B12 (VitB12), iron (Fe), ferritin, copper (Cu), ionic calcium (CaI), magnesium (Mg), and folic acid. The mean weight loss from baseline to 6 months after surgery was 35.34±4.82%. Markers of autonomic nervous system balance (P<.01), stiffness index (P<.01), standard deviation of normal-to-normal R-R intervals (SDNN) (P<.01), and insulin resistance (P<.001) also improved. With regard to the micronutrients measured, 34 patients demonstrated some kind of deficiency. There was a high percentage of Zn deficiency both preoperatively (55.55%) and postoperatively (61.11%), and 33.33% of the patients were deficient in prealbumin postoperatively. Protein intake 6 months after surgery was below the recommended intake (<70 g/d) for 88.88% of the patients. Laboratory analyses demonstrated an average decrease in total protein (P<.05), prealbumin (P = .002), and PTH (P = .008) between pre- and postsurgery, and a decrease in the percentage of deficiencies for Mg (P<.05), CaI (P<.05), and Fe (P = .021). Despite improvements in autonomic nervous system balance, stiffness index markers, and insulin resistance, we found a high prevalence of hypozincemia at 6 months post-RYGB.
Furthermore, protein supplements were needed to maintain an adequate protein intake up to 6 months postsurgery.

Relevance:

20.00%

Abstract:

It is well known that long-term use of shampoo causes damage to human hair. Although the Lowry method has been widely used to quantify hair damage, it is unsuitable in the presence of some surfactants, and no alternative method has been proposed in the literature. In this work, a different method is used to investigate and compare the hair damage induced by four types of surfactants (including three commercial-grade surfactants) and water. Hair samples were immersed in aqueous surfactant solutions under conditions that resemble a shower (38 °C, constant shaking). These solutions became colored with time of contact with hair, and their UV-vis spectra were recorded. For comparison, the amounts of protein extracted from hair by sodium dodecyl sulfate (SDS) and by water were estimated by the Lowry method. Additionally, non-pigmented vs. pigmented hair, as well as sepia melanin, were used to understand the color of the washing solutions and their spectra. The results presented herein show that hair degradation is mostly caused by the extraction of proteins, cuticle fragments, and melanin granules from the hair fiber. It was found that the intensity of the solution color varies with the charge density of the surfactant. Furthermore, the intensity of the solution color can be correlated with the amount of protein quantified by the Lowry method as well as with the degree of hair damage. The UV-vis spectrum of hair washing solutions is thus a simple and straightforward way to quantify and compare hair damage induced by different commercial surfactants.

Relevance:

20.00%

Abstract:

The search for an Alzheimer's disease (AD) biomarker is one of the most relevant contemporary research topics due to the high prevalence and social costs of the disease. Functional connectivity (FC) of the default mode network (DMN) is a plausible candidate for such a biomarker. We evaluated 22 patients with mild AD and 26 age- and gender-matched healthy controls. All subjects underwent resting-state functional magnetic resonance imaging (fMRI) in a 3.0 T scanner. To identify the DMN, seed-based FC of the posterior cingulate was calculated. We also measured the sensitivity and specificity of the method and assessed correlations with cognitive performance. We found a significant difference between patients with mild AD and controls in average z-scores: DMN, whole cortical positive (WCP), and absolute values. DMN individual values showed a sensitivity of 77.3% and a specificity of 70%. DMN and WCP values were correlated with global cognition and episodic memory performance. We showed that individual measures of DMN connectivity could be considered a promising method for differentiating AD, even at an early phase, from normal aging. Further studies with larger numbers of participants, as well as validation of normal values, are needed for more definitive conclusions.
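
Seed-based FC of the kind described above reduces to correlating a seed time series against every voxel and Fisher z-transforming the result. The sketch below does this on synthetic signals; nothing here reflects the study's actual data or preprocessing.

```python
import numpy as np

# Synthetic seed-based functional connectivity (invented signals).
rng = np.random.default_rng(7)
t = 180                                  # hypothetical number of time points
seed = rng.normal(size=t)                # posterior-cingulate seed series

voxels = np.empty((3, t))
voxels[0] = 0.8 * seed + 0.2 * rng.normal(size=t)   # strongly coupled voxel
voxels[1] = 0.3 * seed + 0.7 * rng.normal(size=t)   # weakly coupled voxel
voxels[2] = rng.normal(size=t)                      # uncoupled voxel

r = np.array([np.corrcoef(seed, v)[0, 1] for v in voxels])
z = np.arctanh(r)                        # Fisher z-transform of r values
print("r:", np.round(r, 3), " z:", np.round(z, 3))
```

Group statistics such as the average z-scores reported above would then be computed over these z maps.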

Relevance:

20.00%

Abstract:

In this study, the transmission-line modeling (TLM) method applied to bio-thermal problems was improved by incorporating several novel computational techniques. Graded meshes made the computation 9 times faster and used only a fraction (16%) of the computational resources required by regular meshes when analyzing heat flow through heterogeneous media; unlike regular meshes, graded meshes also allow heat sources to be modeled in all segments of the mesh. A new boundary condition that accounts for thermal properties is introduced, resulting in more realistic modeling of complex problems, along with a new way of calculating an error parameter. The temperatures calculated between nodes were compared against results from the literature and agreed to within 1%. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential for modeling heat transfer in biological systems.
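
The paper's TLM formulation is not reproduced here; as a hedged stand-in, the sketch below marches a plain explicit finite-difference model of 1-D heat conduction to steady state and computes a relative error against the exact linear profile, mirroring the sub-1% agreement check described above. All values are invented.

```python
# Explicit finite-difference model of 1-D heat conduction (illustrative only;
# the study's actual method is transmission-line modeling with graded meshes).
nx, alpha, dx, dt = 21, 1.0e-4, 0.01, 0.2   # nodes, diffusivity (m^2/s), m, s
r = alpha * dt / dx**2                      # must satisfy r <= 0.5 (stability)
assert r <= 0.5

T = [20.0] * nx
T[0], T[-1] = 100.0, 20.0                   # fixed boundary temperatures, C
for _ in range(20000):                      # march to (near) steady state
    T = ([T[0]]
         + [T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
            for i in range(1, nx - 1)]
         + [T[-1]])

# Steady-state conduction between fixed boundaries is a straight line
exact = [100.0 + (20.0 - 100.0) * i / (nx - 1) for i in range(nx)]
err = max(abs(a - b) for a, b in zip(T, exact)) / 80.0
print(f"max relative error vs. exact profile: {err:.2e}")
```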