958 results for automated lexical analysis
Abstract:
Temperature, pressure, gas stoichiometry, and residence time were varied to control the yield and product distribution of the palladium-catalyzed aminocarbonylation of aromatic bromides in both a silicon microreactor and a packed-bed tubular reactor. Automation of the system set points and of product sampling enabled facile and repeatable reaction analysis with minimal operator supervision. The reaction was observed to divide into two temperature regimes. An automated system was used to screen steady-state conditions for offline analysis by gas chromatography to fit a reaction rate model. Additionally, a transient temperature-ramp method using online infrared analysis allowed more rapid determination of the reaction activation energy in the lower temperature regime. The entire reaction, spanning both regimes, was modeled in good agreement with the experimental data.
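The activation-energy determination described above rests on the linearized Arrhenius relation ln k = ln A − Ea/(R·T). A minimal sketch of that fit follows; the temperatures, rate constants, and the 50 kJ/mol activation energy are hypothetical illustration values, not data from the paper:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def activation_energy(temps_K, rate_constants):
    """Fit ln k = ln A - Ea/(R*T) by linear regression in 1/T.
    Returns (Ea in J/mol, pre-exponential factor A)."""
    x = 1.0 / np.asarray(temps_K, dtype=float)
    y = np.log(np.asarray(rate_constants, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return -slope * R, np.exp(intercept)

# Hypothetical rate constants generated from Ea = 50 kJ/mol, A = 1e6
T = np.array([350.0, 375.0, 400.0, 425.0])
k = 1e6 * np.exp(-50_000.0 / (R * T))
Ea, A = activation_energy(T, k)
```

With real temperature-ramp data the same regression yields Ea from the slope of ln k versus 1/T within each regime.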
Abstract:
This paper describes the development and evaluation of a sequential injection method to automate the determination of methyl parathion by square-wave adsorptive cathodic stripping voltammetry, exploiting the concept of monosegmented flow analysis to perform in-line sample conditioning and standard addition. The accumulation and stripping steps are carried out in the sample medium conditioned with 40 mmol L⁻¹ Britton-Robinson buffer (pH 10) in 0.25 mol L⁻¹ NaNO₃. The homogenized mixture is injected at a flow rate of 10 µL s⁻¹ toward the flow cell, which is adapted to the capillary of a hanging mercury drop electrode. After a suitable deposition time, the flow is stopped and the potential is scanned from -0.3 to -1.0 V versus Ag/AgCl at a frequency of 250 Hz and a pulse height of 25 mV. The linear dynamic range is observed for methyl parathion concentrations between 0.010 and 0.50 mg L⁻¹, with detection and quantification limits of 2 and 7 µg L⁻¹, respectively. The sampling throughput is 25 h⁻¹ when the in-line standard addition and sample conditioning protocols are followed, but this frequency can be increased to 61 h⁻¹ if the sample is conditioned off-line and quantified using an external calibration curve. The method was applied to the determination of methyl parathion in spiked water samples, and its accuracy was evaluated both by comparison with high-performance liquid chromatography with UV detection and by recovery percentages. Although no evidence of statistically significant differences between the expected and obtained concentrations was observed, the method is susceptible to interference from other pesticides (e.g., parathion, dichlorvos) and natural organic matter (e.g., fulvic and humic acids), so isolation of the analyte may be required for more complex sample matrices. © 2007 Elsevier B.V. All rights reserved.
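The standard-addition quantification at the heart of such a method can be sketched numerically; the spike levels, the linear detector response, and the 0.10 mg/L sample concentration below are hypothetical illustration values, not results from the paper:

```python
import numpy as np

def standard_addition(added_conc, signal):
    """Estimate the analyte concentration as the magnitude of the
    x-intercept of the signal vs. added-concentration regression line."""
    slope, intercept = np.polyfit(added_conc, signal, 1)
    return intercept / slope

# Hypothetical peak currents for spikes of 0-0.3 mg/L (true sample: 0.10 mg/L)
added = np.array([0.0, 0.10, 0.20, 0.30])
current = 50.0 * (0.10 + added)  # idealized linear response, arbitrary units
c_sample = standard_addition(added, current)
```

The estimate is the usual standard-addition read-out: extrapolating the regression line to zero signal and taking the magnitude of the intercept on the concentration axis.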
Abstract:
This paper describes the automation of a fully electrochemical system for preconcentration, cleanup, separation, and detection, comprising the hyphenation of a thin-layer electrochemical flow cell with CE coupled with contactless conductivity detection (CE-C⁴D). Traces of heavy metal ions were extracted from the pulsed-flowing sample and accumulated on a glassy carbon working electrode by electroreduction for several minutes. Anodic stripping of the accumulated metals was synchronized with hydrodynamic injection into the capillary. The effects of the angle of the slant-polished tip of the CE capillary and its orientation toward the working electrode in the electrochemical preconcentration (EPC) flow cell, and of the accumulation time, were studied, aiming at maximum CE-C⁴D signal enhancement. After 6 min of EPC, enhancement factors close to 50 were obtained for thallium, lead, cadmium, and copper ions, and about 16 for zinc ions. Limits of detection below 25 nmol/L were estimated for all target analytes except zinc. A second separation dimension was added to the CE separation by staircase scanning of the potentiostatic deposition and/or stripping potentials of the metal ions, as implemented with the EPC-CE-C⁴D flow system. A matrix exchange between the deposition and stripping steps, highly valuable for sample cleanup, can be straightforwardly programmed with the multi-pumping flow management system. The automated simultaneous determination of traces of five accumulable heavy metals together with four non-accumulated alkali and alkaline earth metals in a single run was demonstrated, highlighting the potential of the system.
Abstract:
This paper analyzes some forms of linguistic manipulation in Japanese newspapers when reporting on North Korea and its nuclear tests. The focus lies on lexical ambiguity in headlines and journalists' voices in the body of the articles, which result in manipulation of the minds of the readers. The study is based on a corpus of nine articles from two of Japan's largest newspapers, Yomiuri Online and Asahi Shimbun Digital. The linguistic phenomena that contribute to creating manipulation are divided by Short-Term Memory impact or Long-Term Memory impact, and examples are discussed under each of the categories. The main results of the study are that headlines in Japanese newspapers do not make use of an ambiguous, double-grounded structure. However, the articles are filled with explicit and implied attitudes, as well as attributed material from people of high social status, which suggests that manipulation of the long-term memory is a tool used in Japanese media.
Abstract:
The analysis of alcoholic beverages for the important carcinogenic contaminant ethyl carbamate is very time-consuming and expensive. Due to possible matrix interferences, sample cleanup using a diatomaceous earth (Extrelut) column is required prior to gas chromatographic-mass spectrometric measurement. A limiting step in this process is the rotary evaporation of the eluate containing the analyte in organic solvents, which is currently conducted manually and requires approximately 20-30 min per sample. This paper introduces the use of a parallel evaporation device for ethyl carbamate analysis, which allows the simultaneous evaporation of 12 samples to a specified residual volume without manual intervention. A more efficient and less expensive analysis is therefore possible. Method validation showed no differences between the fully automated parallel evaporation and the manual operation. The applicability was proven by analyzing authentic spirit samples from Germany, Canada, and Brazil. It is interesting to note that Brazilian cachaças had a relatively high incidence of ethyl carbamate contamination (55% of all samples were above 0.15 mg/L), which may be of public health relevance and requires further evaluation.
Abstract:
The present study used cone-beam computed tomography (CBCT) to evaluate the apical canal transportation and centralization ability of different automated systems after root canal preparation. The mesiobuccal canals of maxillary first molars (n=10 per group) were prepared with: GI - reciprocating system with K-Flexofile; GII - reciprocating system with NiTiFlex files; GIII - rotary system with K3 instruments; GIV - rotary system with RaCe instruments. CBCT scans were taken before and after biomechanical preparation up to a #40.02 diameter. Canal transportation was determined by measuring the smallest distance between the inner canal walls and the mesial and distal sides of the root. The centralization ability corresponded to the difference between the transportation measurements, using a linear voxel-by-voxel method of analysis. The mean transportation was 0.06 ± 0.14 mm, with a tendency to deviate toward the mesial side of the root (n=22), with no statistically significant difference among the groups (p=0.4153). The mean centralization index was 0.15 ± 0.65, also without statistically significant difference among the groups (p=0.0881). It may be concluded that apical canal transportation and centralization ability were not influenced by the type of mechanical movement or the instruments used.
Abstract:
The impact of peritoneal dialysis (PD) modality on patient survival and peritonitis rates is not fully understood, and no large-scale randomized clinical trial (RCT) is available. In the absence of an RCT, the use of an advanced matching procedure to reduce selection bias in large cohort studies may be the best approach. The aim of this study is to compare automated peritoneal dialysis (APD) and continuous ambulatory peritoneal dialysis (CAPD) with respect to peritonitis risk, technique failure, and patient survival in a large nationwide PD cohort. This is a prospective cohort study that included all incident PD patients with at least 90 days of PD recruited in the BRAZPD study. All patients treated exclusively with either APD or CAPD were matched on 15 covariates using a propensity score calculated with the nearest-neighbor method. The clinical outcomes analyzed were overall mortality, technique failure, and time to first peritonitis. All curves were also adjusted for the presence of competing risks with the Fine and Gray analysis. After the matching procedure, 2,890 patients were included in the analysis (1,445 in each group). Baseline characteristics were similar for all covariates, including age, diabetes, BMI, center experience, coronary artery disease, cancer, literacy, hypertension, race, previous HD, gender, pre-dialysis care, family income, peripheral artery disease, and year of starting PD. The mortality rate was higher in CAPD patients (SHR 1.44, 95% CI 1.21-1.71) than in APD patients, but no difference was observed for technique failure (SHR 0.83, 95% CI 0.69-1.02) or time to first peritonitis (SHR 0.96, 95% CI 0.93-1.11). In this first large PD cohort study with groups balanced for several covariates using propensity score matching, PD modality was associated with differences in neither time to first peritonitis nor technique failure. Nevertheless, patient survival was significantly better in APD patients.
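Nearest-neighbor propensity score matching of the kind used in such cohort studies can be sketched as follows. This is a toy illustration, not the study's procedure: the covariates and group labels are invented, scikit-learn's logistic regression stands in for the score model, and the real analysis matched on 15 covariates with competing-risks adjustment not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, treated):
    """Greedy 1:1 nearest-neighbour matching on the propensity score,
    without replacement, in treated-patient order."""
    # Propensity score: P(treated = 1 | covariates X)
    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated == 1)[0]
    c_idx = list(np.where(treated == 0)[0])
    pairs = []
    for i in t_idx:
        # Closest remaining control in propensity-score distance
        j = min(c_idx, key=lambda j: abs(ps[i] - ps[j]))
        pairs.append((i, j))
        c_idx.remove(j)
    return pairs, ps

# Invented covariates (age, diabetes flag) for 8 hypothetical patients
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(60, 10, 8), rng.integers(0, 2, 8)])
treated = np.array([1, 1, 1, 0, 0, 0, 0, 0])
pairs, ps = propensity_match(X, treated)
```

Greedy matching without replacement is the simplest variant; production analyses typically add caliper constraints and balance diagnostics on the matched sample.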
Abstract:
A sensitive, selective, and reproducible in-tube solid-phase microextraction and liquid chromatography (in-tube SPME/LC-UV) method for the determination of lidocaine and its metabolite monoethylglycinexylidide (MEGX) in human plasma has been developed, validated, and applied to a pharmacokinetic study in pregnant women with gestational diabetes mellitus (GDM) subjected to epidural anesthesia. Important factors in the optimization of in-tube SPME performance are discussed, including the draw/eject sample volume, draw/eject cycle number, draw/eject flow rate, sample pH, and influence of plasma proteins. The limit of quantification of the in-tube SPME/LC method was 50 ng/mL for both lidocaine and its metabolite. The interday and intraday precision had coefficients of variation lower than 8%, and accuracy ranged from 95 to 117%. The response of the in-tube SPME/LC method was linear over a dynamic range from 50 to 5000 ng/mL, with correlation coefficients higher than 0.9976. The developed method was successfully used to analyze lidocaine and its metabolite in plasma samples from pregnant women with GDM subjected to epidural anesthesia.
Abstract:
Recent experimental evidence has suggested a neuromodulatory deficit in Alzheimer's disease (AD). In this paper, we present a new electroencephalogram (EEG)-based metric to quantitatively characterize neuromodulatory activity. More specifically, the short-term EEG amplitude modulation rate-of-change (i.e., modulation frequency) is computed for five EEG subband signals. To test the performance of the proposed metric, a classification task was performed on a database of 32 participants partitioned into three groups of approximately equal size: healthy controls, patients diagnosed with mild AD, and those with moderate-to-severe AD. To gauge the benefits of the proposed metric, performance results were compared with those obtained using EEG spectral peak parameters, which were recently shown to outperform other conventional EEG measures. Using a simple feature selection algorithm based on area-under-the-curve maximization and a support vector machine classifier, the proposed parameters resulted in accuracy gains, relative to spectral peak parameters, of 21.3% when discriminating between the three groups and 50% when the mild and moderate-to-severe groups were merged into one. The preliminary findings reported herein provide promising insights that automated tools may be developed to assist physicians in very early diagnosis of AD, as well as provide researchers with a tool to automatically characterize cross-frequency interactions and their changes with disease.
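One common way to estimate a subband amplitude-modulation frequency is to band-pass the EEG into the subband, take the Hilbert envelope, and locate the envelope's spectral peak. The sketch below does this with SciPy on a synthetic amplitude-modulated signal; it illustrates the general idea only, and the filter design, band edges, and windowing are assumptions rather than the authors' exact metric:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def dominant_modulation_freq(x, fs, band):
    """Band-pass x into one subband, take the Hilbert envelope,
    and return the envelope's dominant (modulation) frequency in Hz."""
    b, a = butter(4, band, btype="band", fs=fs)
    sub = filtfilt(b, a, x)
    env = np.abs(hilbert(sub))
    env = env - env.mean()  # drop the DC component of the envelope
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs[np.argmax(spec)]

# Synthetic check: a 10 Hz (alpha-band) carrier amplitude-modulated at 2 Hz
fs = 256
t = np.arange(0, 8, 1 / fs)
x = (1 + 0.8 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 10 * t)
fmod = dominant_modulation_freq(x, fs, (8, 12))
```

For the synthetic signal the recovered modulation frequency should sit near the 2 Hz modulator; real EEG would require artifact rejection and averaging over short windows.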
Abstract:
The development of new procedures for quickly obtaining accurate information on the physiological potential of seed lots is essential for quality control programs in the seed industry. In this study, the effectiveness of an automated system of seedling image analysis (Seed Vigor Imaging System - SVIS) in determining the physiological potential of sun hemp seeds, and its relationship with electrical conductivity tests, was evaluated. SVIS evaluations were performed three and four days after sowing, and data on the vigor index and the length and uniformity of seedling growth were collected. The electrical conductivity test was performed on replicates of 50 seeds placed in containers with 75 mL of deionized water at 25 °C, and readings were taken after 1, 2, 4, 8 and 16 hours of imbibition. Electrical conductivity measurements at 4 or 8 hours, and the use of SVIS on three-day-old seedlings, can effectively detect differences in vigor among sun hemp seed lots.
Abstract:
The first phase of the research activity was devoted to the state of the art of cycling infrastructure, bicycle use, and evaluation methods. In this part, the candidate studied the "bicycle system" in countries with high bicycle use, in particular the Netherlands. An evaluation was carried out of the questionnaires from the survey conducted within the European project BICY on mobility in general in 13 cities of the participating countries. The questionnaire was designed, tested, and implemented, and was later validated by a test in Bologna. The results were corrected with information on the demographic situation and compared with official data. The cycling infrastructure analysis was conducted on the basis of information from the OpenStreetMap database. This activity consisted of programming algorithms in Python to extract infrastructure data from the database for a region, and to sort and filter cycling infrastructure while calculating attributes such as the length of the arcs of paths. The results obtained were compared with official data where available. The structure of the thesis is as follows: 1. Introduction: description of the state of cycling in several advanced countries, description of methods of analysis and their importance for implementing appropriate cycling policies, and the supply and demand of bicycle infrastructure. 2. Survey on mobility: details of the survey developed and the method of evaluation; the results obtained are presented and compared with official data. 3. Analysis of cycling infrastructure based on the OpenStreetMap database: describes the methods and algorithms developed during the PhD; the results obtained by the algorithms are compared with official data. 4. Discussion: the above results are discussed and compared; in particular, cycling demand is compared with the length of the cycle network within a city. 5. Conclusions.
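The arc-length computation over OSM cycling infrastructure can be sketched in a self-contained way. The tag filter and the toy ways below are simplifying assumptions for illustration; real OSM extracts need a proper parser and richer cycleway tagging rules than this:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in metres."""
    R = 6_371_000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def cycleway_length_m(ways):
    """Sum the arc lengths of ways tagged as cycling infrastructure.
    `ways` is a list of dicts: {'tags': {...}, 'nodes': [(lat, lon), ...]}."""
    total = 0.0
    for way in ways:
        tags = way["tags"]
        if tags.get("highway") == "cycleway" or tags.get("cycleway") not in (None, "no"):
            pts = way["nodes"]
            total += sum(haversine_m(*pts[i], *pts[i + 1]) for i in range(len(pts) - 1))
    return total

# Toy example: one cycleway running ~1.11 km north, one non-cycling street
ways = [
    {"tags": {"highway": "cycleway"},
     "nodes": [(44.49, 11.34), (44.50, 11.34)]},
    {"tags": {"highway": "residential"},
     "nodes": [(44.49, 11.34), (44.49, 11.35)]},
]
length = cycleway_length_m(ways)
```

Summing haversine distances between consecutive way nodes is the standard way to obtain arc lengths from OSM geometry before aggregating them per region.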
Abstract:
The main goal of this thesis is to facilitate the development of industrial automated systems by applying formal methods to ensure system reliability. A new formulation of the distributed diagnosability problem in terms of Discrete Event Systems theory and the automata framework is presented, which is then used to enforce the desired property of the system rather than just verifying it. This approach tackles the state explosion problem with modeling patterns and new algorithms aimed at verifying the diagnosability property in the context of the distributed diagnosability problem. The concepts are validated with a newly developed software tool.
Abstract:
An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components, and more complex shapes such as window seals. When polymers are used, the die is fed by a screw extruder, which melts, mixes, and pressurizes the material through the rotation of either a single or a double screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet, which is cut to the desired length. Generally, the primary target of a well-designed die is a uniform outlet velocity without excessively raising the pressure required to extrude the polymer through the die. Other properties, such as temperature uniformity and residence time, are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels. Due to the complexity of die geometry and of polymer material properties, designing complex dies by analytical methods is difficult; for complex dies, iterative methods must be used, and an automated iterative method is desired for die optimization. To automate the design and optimization of an extrusion die, two issues must be dealt with. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part; this file is then used in commercial meshing software. Skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work, a simplex method and a modified trust region method were employed for the automated optimization of die geometries. For the trust region method, a discrete derivative and a BFGS Hessian approximation were used. To deal with the noise in the function, the trust region method was modified to automatically adjust the discrete derivative step size and the trust region based on changes in noise and function contour. Generally, the uniformity of velocity at the exit of the extrusion die can be improved by increasing the resistance across the die, but this is limited by the pressure capabilities of the extruder. In the optimization, a penalty factor that increases exponentially beyond the pressure limit is applied. This penalty can be applied in two different ways: the first only to designs that exceed the pressure limit, the second to designs both above and below the pressure limit. Both methods were tested and compared in this work.
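The combination of a simplex search with an exponential pressure penalty can be sketched with a toy one-parameter die model; the model functions, the limit value, and the single gap parameter below are invented for illustration and stand in for the mesh-based flow simulations used in the work:

```python
import numpy as np
from scipy.optimize import minimize

P_LIMIT = 30.0  # hypothetical extruder pressure limit, arbitrary units

def die_model(gap):
    """Toy stand-in for a die simulation: returns (velocity variation,
    pressure drop) as simple monotone functions of one gap parameter."""
    velocity_variation = (gap - 1.5) ** 2 + 0.05  # minimised near gap = 1.5
    pressure = 40.0 / gap                         # narrower gap -> higher pressure
    return velocity_variation, pressure

def objective(x):
    """Velocity non-uniformity plus an exponential penalty that grows
    only once the pressure limit is exceeded (the first penalty scheme)."""
    var, p = die_model(x[0])
    penalty = np.exp(p - P_LIMIT) - 1.0 if p > P_LIMIT else 0.0
    return var + penalty

# Nelder-Mead is a derivative-free simplex method, tolerant of noisy objectives
res = minimize(objective, x0=[2.0], method="Nelder-Mead")
best_gap = res.x[0]
```

Because the penalty is zero below the limit and grows exponentially above it, the simplex search is free to move anywhere in the feasible region but is pushed back sharply once the pressure constraint is violated; the second scheme described above would instead apply a (smaller) exponential term on both sides of the limit.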