893 results for Serial Extraction
Abstract:
Automatic creation of polarity lexicons is a crucial issue to be solved in order to reduce the time and effort spent in the first steps of Sentiment Analysis. In this paper we present a methodology based on linguistic cues that allows us to automatically discover, extract and label subjective adjectives that should be collected in a domain-based polarity lexicon. For this purpose, we designed a bootstrapping algorithm that, from a small set of seed polar adjectives, is capable of iteratively identifying, extracting and annotating positive and negative adjectives. Additionally, the method automatically creates lists of highly subjective elements that change their prior polarity even within the same domain. The proposed algorithm reached a precision of 97.5% for positive adjectives and 71.4% for negative ones in the semantic orientation identification task.
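The abstract does not describe the algorithm in detail; the following is a minimal, hypothetical Python sketch of the kind of bootstrapping loop it refers to. It assumes a corpus of sentences and seed sets of polar adjectives, and uses an invented conjunction-based cue (two adjectives joined by "and" share polarity, two joined by "but" have opposite polarity); all names, and the cue itself, are illustrative only, not the paper's method.

import re

def bootstrap_polarity_lexicon(sentences, pos_seeds, neg_seeds, max_iters=10):
    """Iteratively grow positive/negative adjective sets from a few seed adjectives."""
    positive, negative = set(pos_seeds), set(neg_seeds)
    # Toy linguistic cue: "X and Y" -> same polarity, "X but Y" -> opposite polarity
    pattern = re.compile(r"\b(\w+)\s+(and|but)\s+(\w+)\b")
    for _ in range(max_iters):
        new_pos, new_neg = set(), set()
        for sentence in sentences:
            for left, conj, right in pattern.findall(sentence.lower()):
                same_polarity = (conj == "and")
                for known, candidate in ((left, right), (right, left)):
                    if candidate in positive or candidate in negative:
                        continue  # already labelled
                    if known in positive:
                        (new_pos if same_polarity else new_neg).add(candidate)
                    elif known in negative:
                        (new_neg if same_polarity else new_pos).add(candidate)
        if not new_pos and not new_neg:
            break  # fixed point: no new adjectives discovered this iteration
        positive |= new_pos
        negative |= new_neg
    return positive, negative

# Example call (hypothetical data):
# pos, neg = bootstrap_polarity_lexicon(review_sentences, {"good", "nice"}, {"bad", "ugly"})

A real system would additionally restrict candidates to adjectives via part-of-speech tagging and track the highly subjective, polarity-shifting items the abstract mentions.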
Abstract:
This work briefly analyses the difficulties of adopting the Semantic Web and, in particular, proposes systems for gauging the current level of migration to the different technologies that make up the Semantic Web. It focuses on the presentation and description of two tools, DigiDocSpider and DigiDocMetaEdit, designed with the aim of verifying, evaluating, and promoting its implementation.
Abstract:
Solid-phase extraction (SPE) in tandem with dispersive liquid-liquid microextraction (DLLME) has been developed for the determination of mononitrotoluenes (MNTs) in several aquatic samples using a gas chromatography-flame ionization detection (GC-FID) system. In the hyphenated SPE-DLLME, the MNTs were first extracted from a large volume of aqueous sample (100 mL) onto 500 mg of octadecyl silane (C18) sorbent. After elution of the analytes from the sorbent with acetonitrile, the resulting solution was subjected to the DLLME procedure so that additional preconcentration could be achieved. The parameters influencing the extraction efficiency, such as breakthrough volume, type and volume of the elution solvent (disperser solvent) and extracting solvent, as well as salt addition, were studied and optimized. The calibration curves were linear in the range of 0.5-500 μg/L and the limit of detection for all analytes was found to be 0.2 μg/L. The relative standard deviations (for 0.75 μg/L of MNTs) without internal standard varied from 2.0 to 6.4% (n=5). The relative recoveries for well, river and sea water samples, spiked at a concentration of 0.75 μg/L of the analytes, were in the range of 85-118%.
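For orientation, the figures of merit quoted above are conventionally defined as follows. These are standard textbook definitions, not expressions taken from the paper, and the volumes are left symbolic because the DLLME solvent volumes are not given in the abstract:

\[
\mathrm{RR}(\%) = \frac{C_{\text{found}} - C_{\text{real}}}{C_{\text{added}}} \times 100,
\qquad
\mathrm{EF}_{\text{overall}} \approx \frac{V_{\text{sample}}}{V_{\text{final extract}}},
\]

where \(C_{\text{found}}\) is the concentration measured in the spiked sample, \(C_{\text{real}}\) the concentration in the unspiked sample, \(C_{\text{added}}\) the spiked level (0.75 μg/L here), and the overall enrichment factor assumes quantitative transfer through both the SPE and the DLLME steps.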
Abstract:
In sexual assault cases, autosomal DNA analysis of gynecological swabs is a challenge, as the presence of a large quantity of female material may prevent the detection of the male DNA. A solution to this problem is differential DNA extraction, but as different protocols exist, it was decided to test their efficiency on simulated casework samples. Four difficult samples were sent to the nine Swiss laboratories active in forensic genetics. They used their routine protocols to separate the epithelial cell fraction, enriched with the non-sperm DNA, from the sperm fraction. The DNA extracts were then sent to the organizing laboratory for analysis. Estimates of the male to female DNA ratio without differential DNA extraction ranged from 1:38 to 1:339, depending on the semen used to prepare the samples. After differential DNA extraction, most of the ratios ranged from 1:12 to 9:1, allowing the detection of the male DNA. Compared to direct DNA extraction, cell separation resulted in losses of 94-98% of the male DNA. As expected, more male DNA was generally present in the sperm fraction than in the epithelial cell fraction. However, for about 30% of the samples, the reverse trend was observed. The recovery of male and female DNA varied widely across laboratories. An experimental design similar to the one used in this study may help with local protocol testing and improvement.
Abstract:
Monetary policy is conducted in an environment of uncertainty. This paper sets up a model where the central bank uses real-time data from the bond market together with standard macroeconomic indicators to estimate the current state of the economy more efficiently, while taking into account that its own actions influence what it observes. The timeliness of bond market data allows for quicker responses of monetary policy to disturbances compared to the case when the central bank has to rely solely on collected aggregate data. The information content of the term structure creates a link between the bond market and the macroeconomy that is novel to the literature. To quantify the importance of the bond market as a source of information, the model is estimated on data for the United States and Australia using Bayesian methods. The empirical exercise suggests that there is some information in the US term structure that helps the Federal Reserve to identify shocks to the economy on a timely basis. Australian bond prices seem to be less informative than their US counterparts, perhaps because Australia is a relatively small and open economy.
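The abstract does not spell out the filtering problem, but the estimation task it describes has the familiar state-space structure. As a purely illustrative sketch (the notation below is generic, not taken from the paper):

\[
x_t = A x_{t-1} + B u_t + w_t, \qquad y_t = C x_t + v_t,
\]

where \(x_t\) is the unobserved state of the economy, \(u_t\) the policy instrument, and \(y_t\) the vector of real-time observables (macroeconomic indicators plus bond yields). The Kalman filter then delivers the nowcast \(\hat{x}_{t|t} = \mathrm{E}[x_t \mid y_1,\dots,y_t]\), which becomes more precise as timelier bond-market series are added to \(y_t\).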
Abstract:
We report results from a randomized policy experiment designed to test whether increased audit risk deters rent extraction in local public procurement and service delivery in Brazil. Our estimates suggest that temporarily increasing annual audit risk by about 20 percentage points reduced the proportion of irregular local procurement processes by about 17 percentage points. This reduction was driven entirely by irregularities involving mismanagement or corruption. In contrast, we find no evidence that increased audit risk affected the quality of publicly provided preventive and primary health care services (measured based on user satisfaction surveys) or compliance with national regulations of the conditional cash transfer program "Bolsa Família".
Abstract:
We estimate the effect of state judiciary presence on rent extraction in Brazilian local governments. We measure rents as irregularities related to waste or corruption uncovered by auditors. Our unique dataset at the level of individual inspections allows us to separately examine extensive and intensive margins of rent extraction. The identification strategy is based on an institutional rule of state judiciary branches according to which prosecutors and judges tend to be assigned to the most populous among contiguous counties forming a judiciary district. Our research design exploits this rule by comparing counties that are largest in their district to counties with identical population size from other districts in the same state, where they are not the most populous. IV estimates suggest that state judiciary presence reduces the share of inspections with irregularities related to waste or corruption by about 10 percent or 0.3 standard deviations. In contrast, we find no effect on the intensive margin of rent extraction. Finally, our estimates suggest that judicial presence reduces rent extraction only for first-term mayors.
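The abstract only summarises the IV design. As a rough illustration of the two-stage least squares estimate it refers to, the sketch below runs a manual 2SLS with numpy, instrumenting judiciary presence with an indicator for being the most populous county in the judiciary district; the variable names and data layout are hypothetical, and standard errors (which need the usual 2SLS correction) are omitted.

import numpy as np

def two_stage_least_squares(y, endog, instrument, exog):
    """Manual 2SLS: regress the endogenous regressor on instrument + controls,
    then regress the outcome on the fitted values + controls."""
    n = len(y)
    const = np.ones((n, 1))
    # First stage: judiciary presence ~ largest-in-district dummy + controls
    Z = np.column_stack([const, instrument, exog])
    first_stage, *_ = np.linalg.lstsq(Z, endog, rcond=None)
    endog_hat = Z @ first_stage
    # Second stage: irregularity share ~ predicted judiciary presence + controls
    X = np.column_stack([const, endog_hat, exog])
    second_stage, *_ = np.linalg.lstsq(X, y, rcond=None)
    return second_stage[1]  # coefficient on judiciary presence

# Hypothetical county-level inputs:
# y          share of inspections with waste/corruption irregularities
# endog      indicator for state judiciary presence in the county
# instrument indicator for being the most populous county in its district
# exog       matrix of controls (e.g., log population)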
Abstract:
Philip II of Spain accumulated debts equivalent to 60% of GDP. He also defaulted four times on his short-term loans, thus becoming the first serial defaulter in history. Contrary to a common view in the literature, we show that lending to the king was profitable even under worst-case scenario assumptions. Lenders maintained long-term relationships with the crown. Losses sustained during defaults were more than compensated by profits in normal times. Defaults were not catastrophic events. In effect, short-term lending acted as an insurance mechanism, allowing the king to reduce his payments in harsh times in exchange for paying a premium in tranquil periods.
Abstract:
Evaluating leaf litter beetle data sampled by Winkler extraction from Atlantic forest sites in southern Brazil. To evaluate the reliability of data obtained by Winkler extraction in Atlantic forest sites in southern Brazil, we studied litter beetle assemblages in secondary forests (5 to 55 years after abandonment) and old-growth forests at two seasonally different points in time. For all regeneration stages, species density and abundance were lower in April than in August, but the assemblage composition of the corresponding forest stages was similar in both months. We suggest that sampling small litter-inhabiting beetles at different points in time using the Winkler technique reveals identical ecological patterns, which are more likely to be influenced by sample incompleteness than by differences in assemblage composition. A strong relationship between litter quantity and beetle occurrence indicates the importance of this variable for the temporal species density pattern. Additionally, the sampled beetle material was compared with beetle data obtained with pitfall traps in one old-growth forest. Over 60% of the focal species captured with pitfall traps were also sampled by Winkler extraction in the different forest stages. A few beetles whose body size was too large for Winkler sampling were captured only with pitfall traps, indicating that the local litter beetle fauna is dominated by small species. Hence, as long as the exclusion of large beetles and of beetle species occurring during the wet season is kept in mind, the Winkler method provides a reliable picture of the local leaf litter beetle community.
Abstract:
This paper deals with the imaginary of interiority in the context of contemporary serial fiction. Without following a route defined by any particular formal variable (author, genre, stylistic current, etc.), I will focus this study on the expressive device of the dream sequence, understood not only as the filming of scenes belonging to the realm of dreams but also as the recreation of delirium, hallucinations, visions, and the like. In a broad sense, given the nature of the chosen topic, it will be necessary to reflect on the relationship between wakefulness, dream, time and identity, as well as on the way in which the redefinition of these concepts simultaneously triggers a new relationship between the narrative components and the story itself.
Abstract:
A quantitative model of water movement within the immediate vicinity of an individual root is developed and the results of an experiment to validate the model are presented. The model is based on the assumption that the amount of water transpired by a plant in a certain period is replaced by an equal volume entering its root system during the same time. It uses the Darcy-Buckingham equation to calculate the soil water matric potential at any distance from a plant root as a function of parameters related to crop, soil and atmospheric conditions. The model output is compared against measurements of soil water depletion by rice roots monitored using γ-beam attenuation in a greenhouse of the Escola Superior de Agricultura "Luiz de Queiroz"/Universidade de São Paulo (ESALQ/USP) in Piracicaba, State of São Paulo, Brazil, in 1993. The experimental results are in agreement with the model output. Model simulations show that a single plant root is able to withdraw water from more than 0.1 m away within a few days. We can therefore assume that root distribution is a less important factor for soil water extraction efficiency.
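For reference, the Darcy-Buckingham equation mentioned above is conventionally written as follows; this is the standard form for unsaturated flow, not the paper's specific radial formulation, whose boundary conditions and parameterisation are not given in the abstract:

\[
q = -K(h)\,\nabla H, \qquad H = h + z,
\]

where \(q\) is the soil water flux density, \(K(h)\) the unsaturated hydraulic conductivity, \(h\) the matric potential and \(z\) the gravitational head. For horizontal radial flow toward a single root this reduces to \(q_r = -K(h)\,\partial h/\partial r\), which a model of this kind integrates to obtain \(h\) as a function of the distance \(r\) from the root.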
Abstract:
Two concentration methods for the fast and routine determination of caffeine (using HPLC-UV detection) in surface water and wastewater are evaluated. Both methods are based on solid-phase extraction (SPE) with octadecyl silica sorbents. A common "offline" SPE procedure shows that quantitative recovery of caffeine is obtained with 2 mL of a methanol-water elution mixture containing at least 60% methanol. The method detection limit is 0.1 μg L−1 when 1 L samples are percolated through the cartridge. The development of an "online" SPE method based on a mini-SPE column, containing 100 mg of the same sorbent and directly connected to the HPLC system, allows the method detection limit to be decreased to 10 ng L−1 with a sample volume of 100 mL. The "offline" SPE method is applied to the analysis of caffeine in wastewater samples, whereas the "online" method is used for analysis in natural waters from streams receiving significant water intakes from local wastewater treatment plants.
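As an illustrative back-of-the-envelope check (assuming quantitative recovery at every step, which the abstract reports only for the elution conditions above), the nominal enrichment factor of the offline procedure follows directly from the volume ratio:

\[
\mathrm{EF} \approx \frac{V_{\text{sample}}}{V_{\text{eluate}}} = \frac{1000\ \mathrm{mL}}{2\ \mathrm{mL}} = 500,
\]

so, under that assumption, a caffeine concentration of 0.1 μg L−1 in the sample corresponds to roughly 50 μg L−1 in the extract injected into the HPLC system.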
Abstract:
In most pathology laboratories worldwide, formalin-fixed, paraffin-embedded (FFPE) samples are the only tissue specimens available for routine diagnostics. Although commercial kits for diagnostic molecular pathology testing are becoming available, most current diagnostic tests are laboratory-based assays. Thus, there is a need for standardized procedures in molecular pathology, starting from the extraction of nucleic acids. To evaluate the current methods for extracting nucleic acids from FFPE tissues, 13 European laboratories participating in the European FP6 program IMPACTS (www.impactsnetwork.eu) isolated nucleic acids from four diagnostic FFPE tissues using their routine methods, followed by quality assessment. The DNA extraction protocols ranged from homemade protocols to commercial kits. Except for one homemade protocol, the methods gave comparable results in terms of the quality of the extracted DNA, measured as the ability to amplify control gene fragments of different sizes by PCR. For array applications or tests that require an accurately determined DNA input, we recommend using silica-based adsorption columns for DNA recovery. For RNA extraction, the best results were obtained using chromatography-column-based commercial kits, which gave the highest yield and the most readily assayable RNA. Quality testing using RT-PCR gave successful amplification of 200-250 bp PCR products from most tested tissues. Modifications of the proteinase K digestion time led to better results, even when commercial kits were applied. The results of the study emphasize the need for quality control of nucleic acid extracts with standardized methods to prevent false negative results and to allow data comparison among different diagnostic laboratories.
Abstract:
Several features that can be extracted from digital images of the sky and that are useful for cloud-type classification of such images are presented. Some features are statistical measurements of image texture, some are based on the Fourier transform of the image and, finally, others are computed from the image after cloudy pixels have been distinguished from clear-sky pixels. The use of the most suitable features in an automatic classification algorithm is also shown and discussed. Both the features and the classifier are developed on images taken by two different camera devices, namely a total sky imager (TSI) and a whole sky imager (WSC), which are placed in two different areas of the world (Toowoomba, Australia, and Girona, Spain, respectively). The performance of the classifier is assessed by comparing its image classification with an a priori classification carried out by visual inspection of more than 200 images from each camera. The index of agreement is 76% when five different sky conditions are considered: clear, low cumuliform clouds, stratiform clouds (overcast), cirriform clouds, and mottled clouds (altocumulus, cirrocumulus). Future directions of this research, regarding both the use of other features and the use of other classification techniques, are also discussed.
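The abstract does not list the individual features. The snippet below is a hypothetical Python illustration of the two simplest families it mentions, statistical texture measurements and features computed after separating cloudy from clear-sky pixels (here with an invented red-to-blue ratio threshold); it is not the paper's actual feature set or classifier.

import numpy as np

def sky_image_features(rgb, cloud_rb_threshold=0.6):
    """Compute a few illustrative features from an RGB sky image
    (array of shape H x W x 3 with float values in [0, 1])."""
    red, blue = rgb[..., 0], rgb[..., 2]
    gray = rgb.mean(axis=-1)

    # Statistical texture measurements on the grayscale image
    mean = gray.mean()
    std = gray.std()
    smoothness = 1.0 - 1.0 / (1.0 + gray.var())   # near 0 for flat skies, approaches 1 for textured ones
    hist, _ = np.histogram(gray, bins=64, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))

    # Features based on cloud/clear-sky segmentation (assumed red-to-blue ratio threshold)
    rb_ratio = red / np.clip(blue, 1e-6, None)
    cloud_mask = rb_ratio > cloud_rb_threshold
    cloud_fraction = cloud_mask.mean()

    return {
        "mean": mean,
        "std": std,
        "smoothness": smoothness,
        "entropy": entropy,
        "cloud_fraction": cloud_fraction,
    }

Such a feature vector could then be fed, for each labelled image, to any supervised classifier trained against the visually inspected reference classes.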