960 results for new methods
Abstract:
Currently the market requires increasingly pure oil derivatives, and with that comes the need for new, more efficient and economically viable methods of obtaining these products. For the removal of sulfur from diesel, most refineries use a catalytic hydrogenation process, hydrodesulfurization. This process demands a high energy input, has a high production cost and shows low efficiency in removing sulfur at low concentrations (below 500 ppm). Adsorption presents itself as an efficient and economically viable alternative to the techniques currently used. The main purpose of this work is therefore to develop and optimize new adsorbents based on diatomite, modified with microemulsions of two non-ionic surfactants to add efficiency to the material, for application in the removal of sulfur present in commercial diesel. Scanning electron microscopy (SEM), X-ray diffraction (XRD), X-ray fluorescence (XRF), thermogravimetry (TG) and N2 adsorption (BET) analyses were performed to characterize the new materials obtained. The variables used for the diatomite modification were: the microemulsion point for each surfactant (RNX 95 and UNTL 90), the microemulsion aqueous phase (with or without the salts CaCl2 and BaCl2), the contact time during the modification and the contact form. The adsorption capacity of the materials obtained was studied using statistical modeling to evaluate the influence of the salt concentration in the aqueous phase (20 ppm to 1500 ppm), the finite-bath temperature (25 to 60 °C) and the sulfur concentration in diesel. Temperature and sulfur concentration (300 to 1100 ppm) were the most significant parameters: increasing their values increases the ability of the modified clay to adsorb the sulfur in diesel fuel. The adsorption capacity increased from 0.43 mg/g to 1.34 mg/g with optimization of the microemulsion point and the addition of salts.
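The adsorption capacities reported above (0.43 to 1.34 mg/g) are of the kind conventionally obtained from a finite-bath mass balance, q_e = (C0 - Ce)V/m. A minimal sketch of that calculation, using hypothetical values for the initial and equilibrium sulfur concentrations, diesel volume and adsorbent mass (none of these per-run values are given in the abstract):

```python
# Adsorption capacity of a modified-diatomite sample in a finite bath.
# q_e = (C0 - Ce) * V / m, the standard mass balance for batch adsorption.
# All numeric values below are hypothetical placeholders, not data from the study.

def adsorption_capacity(c0_ppm: float, ce_ppm: float, volume_l: float, mass_g: float) -> float:
    """Return the equilibrium adsorption capacity in mg of sulfur per g of adsorbent.

    ppm in the liquid is taken as mg/L, the usual convention for sulfur in diesel.
    """
    return (c0_ppm - ce_ppm) * volume_l / mass_g

# Example: 50 mL of diesel at 500 ppm S contacted with 0.5 g of modified diatomite,
# reaching 487 ppm at equilibrium.
q_e = adsorption_capacity(c0_ppm=500.0, ce_ppm=487.0, volume_l=0.050, mass_g=0.5)
print(f"q_e = {q_e:.2f} mg/g")  # -> 1.30 mg/g, the order of magnitude reported above
```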
Abstract:
This research reflects on the dynamics of television reception by studying the Brazilian TV miniseries Hoje é Dia de Maria, produced by the Globo Television Network, and aims broadly to promote inferences about the process of image reading, mainly aesthetic reading in the school context, with a view to forming proficient visual readers. The research was conducted with third-grade students of a state high school located in the city of Natal, Rio Grande do Norte. The theoretical framework draws on the assumptions of cognitive social interactionism to understand language and is based on the ideas of Bakhtin (1992) and Vygotsky (1998), which enabled us to understand social interaction, and on the Theory of Aesthetic Reception and Aesthetic Effect of Jauss (1979) and Iser (1999), which provided a better understanding of aesthetic experience, aesthetic effects and the production of meaning. The methodological approach is qualitative and interpretive, carried out through interviews, observation, a questionnaire and the application of a set of investigative activities, such as introductory exposition of themes, handing out of images and a mediation process. This research is the result of an action-research process conducted as a pedagogical intervention in a state school. The results indicate that the interactional linguistic resources used by the speakers revealed a lack of prior knowledge and repertoire regarding image reading, which initially led them to a cursory reading. It was evident that the respondents were unaware of the initial proposal. However, throughout the meetings it was possible to perceive their transformation: the pre-established concepts were analyzed with the help of mediation, so that by the end the group felt more autonomous and confident in reading images. The survey also produced significant data that the school can use to develop new methods of teaching televisual reading.
Abstract:
Benzylpenicillin (PENG) has been used as the active ingredient in veterinary medicinal products to increase productivity, owing to its therapeutic properties. However, products of poor quality and indiscriminate use result in residues in foods destined for human consumption, especially in milk, which is essential to the diet of children and the elderly. It is therefore indispensable to develop new methods able to detect these residues in food at levels that are toxic to human health, in order to contribute to consumer food safety and to support regulatory agencies in efficient inspection. In this work, methods were developed for the quality control of veterinary drugs based on benzylpenicillin (PENG) that are used in livestock production. Additionally, methodologies were validated for identifying and quantifying residues of the antibiotic in bovine and caprine milk. The analytical control was performed in two steps. In the first, the groups of medicinal product samples I, II, III, IV and V were individually characterized by mid-infrared spectroscopy (4000–600 cm-1). In addition, 37 samples distributed among these groups were analyzed by spectroscopy in the ultraviolet, visible and near-infrared region (UV-VIS-NIR) and by ultra-fast liquid chromatography coupled to a photodiode array detector (UFLC-DAD). The characterization results indicated similarities between PENG and the reference standard samples, primarily in the 1818–1724 cm-1 region of the ν C=O band, which is characteristic of the primary amides of PENG. The UFLC-DAD method presented R of 0.9991, LOD of 7.384 × 10-4 μg mL-1 and LOQ of 2.049 × 10-3 μg mL-1. The analysis showed that 62.16% of the samples presented purity ≥ 81.21%. The UV-VIS-NIR spectroscopy method presented a mean error of ≤ 8–12% between the reference and experimental values, indicating that it is a reliable option for the rapid determination of PENG. In the second stage, a method was established for the extraction and isolation of PENG by the addition of McIlvaine buffer, used for the precipitation of total proteins, at pH 4.0. The results showed excellent PENG recovery values, close to 92.05% for bovine milk samples (method 1), while for goat milk samples (method 2) the PENG recovery was 95.83%. The UFLC-DAD methods were validated in accordance with the maximum residue limit (MRL) of 4 μg kg-1 standardized by CAC/GL16. Validation of method 1 indicated R of 0.9975, LOD of 7.246 × 10-4 μg mL-1 and LOQ of 2.196 × 10-3 μg mL-1. The application of method 1 showed that 12% of the samples presented PENG residue concentrations above the MRL. Method 2 indicated R of 0.9995, LOD of 8.251 × 10-4 μg mL-1 and LOQ of 2.5270 × 10-3 μg mL-1; its application showed that 15% of the samples were above the tolerable limit. The comparative analysis between the methods pointed to better validation for the LCP samples, owing to the reduced matrix effect (hence tcalculated < ttable) caused by the increased recovery of PENG. In this way, all the procedures developed offer simplicity, speed, selectivity, reduced analysis time and lower consumption of reagents and toxic solvents, particularly when compared with established methodologies.
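The R, LOD and LOQ figures quoted for the UFLC-DAD methods are the kind normally derived from a linear calibration curve, with LOD = 3.3 sigma/S and LOQ = 10 sigma/S (sigma being the residual standard deviation and S the slope, as in ICH Q2(R1)). A minimal sketch with made-up PENG calibration points, since the actual calibration data are not given in the abstract:

```python
import numpy as np

# Hypothetical PENG calibration data: concentration (ug/mL) vs. peak area (a.u.).
conc = np.array([0.005, 0.010, 0.050, 0.100, 0.500, 1.000])
area = np.array([1.1e3, 2.0e3, 1.02e4, 2.05e4, 1.01e5, 2.02e5])

# Ordinary least-squares line: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, deg=1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # residual standard deviation (n - 2 dof)

r = np.corrcoef(conc, area)[0, 1]      # correlation coefficient R
lod = 3.3 * sigma / slope              # ICH Q2(R1) limit of detection
loq = 10.0 * sigma / slope             # ICH Q2(R1) limit of quantification

print(f"R = {r:.4f}, LOD = {lod:.3e} ug/mL, LOQ = {loq:.3e} ug/mL")
```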
Abstract:
The uncontrolled disposal by industry of wastewaters containing phenolic compounds has caused irreversible damage to the environment. Because of this, it is now mandatory to develop new methods to treat these effluents before they are discharged. One of the most promising and low-cost approaches is the degradation of phenolic compounds via photocatalysis. The main goal of this work is the customization of a bench-scale photoreactor and the preparation of catalysts from the char originated by the fast pyrolysis of sewage sludge. The experiments were carried out at constant temperature (50 °C) under oxygen flow (410, 515, 650 and 750 ml min-1). The reaction took place in the liquid phase (3.4 liters), where the catalyst concentration was 1 g L-1, the initial concentration of phenol was 500 mg L-1 and the reaction time was set to 3 hours. A 400 W lamp was fitted to the reactor. The oxygen flow was optimized to 650 ml min-1. The pH of the liquid and the nature of the catalyst (acidified and calcined palygorskite, palygorskite impregnated with 3.8% Fe, and the pyrolysis char) were investigated. The catalytic materials were characterized by XRD, XRF and BET. In the photocatalytic degradation of phenol, the results showed that pH has a significant influence on phenol conversion, with the best results at pH 5.5. Phenol conversion ranged from 51.78% for the sewage sludge char to 58.02% for the acidified calcined palygorskite. Liquid samples were analyzed by liquid chromatography and the following compounds were identified: hydroquinone, catechol and maleic acid. A reaction mechanism was proposed in which phenol is transformed in the homogeneous phase and the other compounds react on the catalyst surface. For the latter, the Langmuir-Hinshelwood model was applied; its mass balances led to a system of differential equations, which were solved numerically in order to estimate the kinetic and adsorption parameters. The model fitted the experimental results satisfactorily. From the proposed mechanism and the operating conditions used in this study, the most favored step, regardless of the catalyst, was the transformation of the acid group (originating from quinone compounds) into CO2 and water, whose rate constant k4 was 0.578 mol L-1 min-1 for acidified calcined palygorskite, 0.472 mol L-1 min-1 for Fe2O3/palygorskite and 1.276 mol L-1 min-1 for the sewage sludge char, the latter being the best catalyst for the mineralization of the acid to CO2 and water. The quinones were adsorbed on the acidic sites of the calcined palygorskite and Fe2O3/palygorskite, whose adsorption constants were similar (~4.45 L mol-1) and higher than that of the sewage sludge char (3.77 L mol-1).
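The kinetic treatment described above couples Langmuir-Hinshelwood rate expressions to liquid-phase mass balances and integrates the resulting system of differential equations numerically. A simplified sketch of that workflow, using a hypothetical two-step scheme (phenol transformed in solution, intermediates mineralized on the surface) and placeholder rate and adsorption constants rather than the fitted values from this work:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (illustrative only, not the fitted values from the study).
k1 = 0.0045  # homogeneous phenol transformation rate constant, min^-1
k4 = 0.5     # surface mineralization rate constant, mol L^-1 min^-1
K  = 4.0     # Langmuir adsorption constant of the intermediate, L mol^-1

def rates(t, y):
    """Mass balances: y[0] = phenol (mol/L), y[1] = adsorbing intermediate (mol/L)."""
    phenol, inter = y
    r_homog = k1 * phenol                          # first-order step in solution
    r_surf = k4 * K * inter / (1.0 + K * inter)    # Langmuir-Hinshelwood surface step
    return [-r_homog, r_homog - r_surf]

c_phenol0 = 500.0 / 94.11 / 1000.0   # 500 mg/L phenol -> mol/L (M = 94.11 g/mol)
sol = solve_ivp(rates, t_span=(0.0, 180.0), y0=[c_phenol0, 0.0],
                t_eval=np.linspace(0.0, 180.0, 7))

conversion = 1.0 - sol.y[0, -1] / c_phenol0
print(f"Phenol conversion after 180 min: {conversion:.1%}")
```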
Abstract:
The successful performance of a hydrological model is usually challenged by the quality of the sensitivity analysis, calibration and uncertainty analysis carried out in the modeling exercise and the subsequent simulation results. This is especially important under changing climatic conditions, where there are more uncertainties associated with climate models and downscaling processes that increase the complexity of the hydrological modeling system. In response to these challenges, and to improve the performance of hydrological models under changing climatic conditions, this research proposed five new methods for supporting hydrological modeling. First, a design-of-experiment aided sensitivity analysis and parameterization (DOE-SAP) method was proposed to investigate the significant parameters and provide more reliable sensitivity analysis for improving parameterization during hydrological modeling. Better calibration results, along with advanced sensitivity analysis of the significant parameters and their interactions, were achieved in the case study. Second, a comprehensive uncertainty evaluation scheme was developed to evaluate three uncertainty analysis methods: the sequential uncertainty fitting version 2 (SUFI-2), generalized likelihood uncertainty estimation (GLUE) and parameter solution (ParaSol) methods. The results showed that SUFI-2 performed better than the other two methods based on calibration and uncertainty analysis results. The proposed evaluation scheme was shown to be capable of selecting the most suitable uncertainty method for a case study. Third, a novel sequential multi-criteria based calibration and uncertainty analysis (SMC-CUA) method was proposed to improve the efficiency of calibration and uncertainty analysis and to control the phenomenon of equifinality. The results showed that the SMC-CUA method provided better uncertainty analysis results with higher computational efficiency than the SUFI-2 and GLUE methods, and controlled parameter uncertainty and the equifinality effect without sacrificing simulation performance. Fourth, an innovative response-based statistical evaluation method (RESEM) was proposed for estimating the propagated effects of uncertainty and providing long-term predictions of hydrological responses under changing climatic conditions. Using RESEM, the uncertainty propagated from statistical downscaling to hydrological modeling can be evaluated. Fifth, an integrated simulation-based evaluation system for uncertainty propagation analysis (ISES-UPA) was proposed for investigating the effects and contributions of different uncertainty components to the total uncertainty propagated from statistical downscaling. Using ISES-UPA, the uncertainty from statistical downscaling, the uncertainty from hydrological modeling, and the total uncertainty from the two sources can be compared and quantified. The feasibility of all the methods was tested using hypothetical and real-world case studies. The proposed methods can also be integrated as a hydrological modeling system to better support hydrological studies under changing climatic conditions. The results from the proposed integrated hydrological modeling system can serve as scientific references for decision makers seeking to reduce the potential risk of damage caused by extreme events in long-term water resource management and planning.
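The GLUE method evaluated in the second and third contributions follows a well-documented recipe: sample parameter sets, run the hydrological model, score each run with a likelihood measure such as the Nash-Sutcliffe efficiency, retain the behavioral sets above a threshold, and derive prediction bounds from the behavioral simulations. A minimal sketch under those assumptions, with a toy one-parameter reservoir model standing in for the actual hydrological model used in this research:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(k: float, rain: np.ndarray) -> np.ndarray:
    """Stand-in for a hydrological model: a single linear reservoir with coefficient k."""
    flow = np.zeros_like(rain)
    store = 0.0
    for i, r in enumerate(rain):
        store += r
        flow[i] = k * store
        store -= flow[i]
    return flow

rain = rng.gamma(shape=2.0, scale=3.0, size=200)
observed = toy_model(0.3, rain) + rng.normal(0.0, 0.5, size=200)  # synthetic "observations"

# GLUE: (1) Monte Carlo sampling of parameters, (2) likelihood = Nash-Sutcliffe
# efficiency, (3) retain behavioral sets above a threshold, (4) prediction bounds
# from the behavioral simulations (a full GLUE run weights the quantiles by likelihood).
samples = rng.uniform(0.05, 0.95, size=2000)
sims = np.array([toy_model(k, rain) for k in samples])
nse = 1.0 - ((sims - observed) ** 2).sum(axis=1) / ((observed - observed.mean()) ** 2).sum()

behavioral = nse > 0.5
lower = np.percentile(sims[behavioral], 2.5, axis=0)
upper = np.percentile(sims[behavioral], 97.5, axis=0)
coverage = np.mean((observed >= lower) & (observed <= upper))
print(f"{behavioral.sum()} behavioral parameter sets; 95% band covers {coverage:.0%} of observations")
```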
Abstract:
The period from 1874 to 1901 was a time of significant transition in the economic and political life of Newfoundland. Twenty years into responsible government and with Confederation on the backburner, the colony’s politicians turned their attention to economic diversification, landward development and carving out the island’s place in the British Empire. The period saw both economic prosperity and retrenchment; the construction of a trans-insular railway; the adoption of policies to foster agriculture, forestry, manufacturing and mining; and diplomatic efforts to resolve France’s outstanding claims on the northwest coast of the island. At the same time, the government made an attempt to intervene directly in its primary industry, the fisheries. It created a Fisheries Commission in 1889 that recommended conservation measures and artificial propagation as ways to restore the health of some of the island’s fish stocks. The commission also proposed new methods of curing, packaging and marketing Newfoundland’s cod, as well as a complete overhaul of the truck system. A major player in both the public and private debates surrounding all of these subjects was the Reverend Moses Harvey. Along with being minister of the Free Church of Scotland in St. John’s, Harvey was one of Newfoundland’s most active promoters in the late nineteenth century. He served as the media mouthpiece for both Prime Minister William Whiteway and Prime Minister Robert Thorburn, editing the Evening Mercury – the official organ of the Liberal Party and then the Reform Party – from 1882 to 1883 and from 1885 until 1890. As well, Harvey wrote regular columns on Newfoundland issues for newspapers in London, New York, Boston, Montreal, Toronto, and Halifax. He also produced numerous books, articles, encyclopedia entries, and travel guides outlining the island’s attractions and its vast economic potential. In short, Harvey made a significant contribution to shaping the way residents and the outside world viewed Newfoundland during this period. This thesis examines late nineteenth-century Newfoundland through the writing of Moses Harvey. The biographical approach offers a fuller, more nuanced account of some of the major historical themes of the period, including the politics of progress, the opening up of the interior, railway construction and attitudes toward the fisheries. It also provides an insider’s perspective on what led to some of the major political decisions, policy positions or compromises taken by the Whiteway and Thorburn governments. Finally, a more detailed review of Harvey’s work exposes the practical and political differences that he had with people like D.W. Prowse and Bishop Michael Howley. While these so-called “boomers” in Newfoundland’s historiography agreed on broad themes, they parted ways over what should be done with the fisheries and how best to channel the colony’s growing sense of nationalism.
Abstract:
Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single- and multi-image methods. This thesis focuses on developing algorithms based on mathematical theories for single-image super-resolution problems. Indeed, in order to estimate an output image, we adopt a mixed approach: i.e., we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although the existing methods already perform well, they do not take into account the geometry of the data to: regularize the solution, cluster data samples (samples are often clustered using algorithms with the Euclidean distance as a dissimilarity metric), or learn dictionaries (they are often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we proposed three new methods to overcome these deficiencies. First, we developed SE-ASDS (a structure-tensor based regularization term) in order to improve the sharpness of edges. SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data. The AGNN and GOC methods outperform spectral clustering, soft clustering, and geodesic-distance based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size. The aSOB strategy outperforms both PCA and PGA methods. Finally, we combine all our methods in a single algorithm, named G2SR. Our proposed G2SR algorithm shows better visual and quantitative results when compared with state-of-the-art methods.
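The mixed approach described above reconstructs each patch as a sparse combination of dictionary atoms learned from training samples. The sketch below illustrates that generic sparse-coding step only; it uses scikit-learn's OMP-based SparseCoder with a plain K-means dictionary, not the SE-ASDS, AGNN/GOC, aSOB or G2SR methods proposed in the thesis, and the data are random placeholder patches:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)

# Toy training set: 5000 random 6x6 patches (flattened), zero-mean per patch.
patches = rng.normal(size=(5000, 36))
patches -= patches.mean(axis=1, keepdims=True)

# Learn a crude 128-atom dictionary (K-means centroids as atoms, unit-normalized).
atoms = KMeans(n_clusters=128, n_init=4, random_state=0).fit(patches).cluster_centers_
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

# Sparse-code a test patch with at most 5 active atoms (OMP), then reconstruct it.
coder = SparseCoder(dictionary=atoms, transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)
test_patch = patches[:1]
codes = coder.transform(test_patch)   # shape (1, 128), mostly zeros
reconstruction = codes @ atoms        # sparse combination of dictionary atoms

err = np.linalg.norm(test_patch - reconstruction) / np.linalg.norm(test_patch)
print(f"{np.count_nonzero(codes)} active atoms, relative reconstruction error {err:.2f}")
```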
Abstract:
We have harnessed two reactions catalyzed by the enzyme sortase A and applied them to generate new methods for the purification and site-selective modification of recombinant protein therapeutics.
We utilized native peptide ligation —a well-known function of sortase A— to attach a small-molecule drug specifically to the carboxy-terminus of a recombinant protein. By combining this reaction with the unique phase behavior of elastin-like polypeptides, we developed a protocol that produces homogeneously labeled protein-small molecule conjugates using only centrifugation. The same reaction can be used to produce unmodified therapeutic proteins simply by substituting a single reactant. The isolated proteins or protein-small molecule conjugates do not carry any exogenous purification tags, eliminating the potential influence of these tags on bioactivity. Because both unmodified and modified proteins are produced by a general process that is the same for any protein of interest and does not require any chromatography, the time, effort, and cost associated with protein purification and modification are greatly reduced.
We also developed an innovative and unique method that attaches a tunable number of drug molecules to any recombinant protein of interest in a site-specific manner. Although the ability of sortase A to carry out native peptide ligation is widely used, we demonstrated that sortase A is also capable of attaching small molecules to proteins through an isopeptide bond at lysine side chains within a unique amino acid sequence. This reaction —isopeptide ligation— is a new site-specific conjugation method that is orthogonal to all available protein-small molecule conjugation technologies and is the first site-specific conjugation method that attaches the payload to lysine residues. We show that isopeptide ligation can be applied broadly to peptides, proteins, and antibodies using a variety of small-molecule cargoes to efficiently generate stable conjugates. We thoroughly assessed the site-selectivity of this reaction using a variety of analytical methods and showed that in many cases the reaction is site-specific for lysines in flexible, disordered regions of the substrate proteins. Finally, we showed that isopeptide ligation can be used to create clinically relevant antibody-drug conjugates that have potent cytotoxicity towards cancerous cells.
Improving the care of preterm infants: before, during, and after stabilisation in the delivery room
Abstract:
Introduction: Up to 10% of infants require stabilisation during the transition to extrauterine life. Enhanced monitoring of cardiorespiratory parameters during this time may improve stabilisation outcomes. In addition, technology may facilitate improved preparation for delivery room stabilisation as well as NICU procedures, through educational techniques. Aim: To improve infant care 1) before birth via improved training, 2) during stabilisation via enhanced physiological monitoring and improved practice, and 3) after delivery, in the neonatal intensive care unit (NICU), via improved procedural care. Methods: A multifaceted approach was utilised, including a combination of questionnaire-based surveys, mannequin-based investigations, prospective observational investigations, and a randomised controlled trial involving preterm infants of less than 32 weeks in the delivery room. Forms of technology utilised included different types of mannequins (including a CO2-producing mannequin), qualitative end-tidal CO2 (EtCO2) detectors, a bespoke quantitative EtCO2 detector, and annotated videos of infant stabilisation as well as NICU procedures. Results: Manual ventilation improved with the use of EtCO2 detection, and this was positively assessed by trainees. Quantitative EtCO2 detection in the delivery room is feasible; EtCO2 increased over the first 4 minutes of life in preterm infants, and EtCO2 was higher in preterm infants who were intubated. Current methods of heart rate assessment were found to be unreliable. Electrocardiography (ECG) application warrants further evaluation. Perfusion index (PI) monitoring in the delivery room was feasible. Video recording technology was utilised in several ways; it has many potential benefits, including debriefing and coaching in procedural healthcare, and warrants further evaluation. Parents would welcome the introduction of webcams in the NICU. Conclusions: I have evaluated new methods of improving infant care before, during, and after stabilisation in the delivery room. Specifically, I have developed novel educational tools to facilitate training, and evaluated EtCO2, PI, and ECG during infant stabilisation. I have identified barriers to using webcams in the NICU, to be addressed prior to webcam implementation.
Abstract:
New methods for creating theranostic systems with simultaneous encapsulation of therapeutic, diagnostic, and targeting agents are much sought after. This work reports for the first time the use of coaxial electrospinning to prepare such systems in the form of core–shell fibers. Eudragit S100 was used to form the shell of the fibers, while the core comprised poly(ethylene oxide) loaded with the magnetic resonance contrast agent Gd(DTPA) (Gd(III) diethylenetriaminepentaacetate hydrate) and indomethacin as a model therapeutic agent. The fibers had linear cylindrical morphologies with clear core–shell structures, as demonstrated by electron microscopy. X-ray diffraction and differential scanning calorimetry proved that both indomethacin and Gd(DTPA) were present in the fibers in the amorphous physical form. This is thought to be a result of intermolecular interactions between the different components, the presence of which was suggested by infrared spectroscopy. In vitro dissolution tests indicated that the fibers could provide targeted release of the active ingredients through a combined mechanism of erosion and diffusion. The proton relaxivities for Gd(DTPA) released from the fibers into tris buffer increased (r1 = 4.79–9.75 s–1 mM–1; r2 = 7.98–14.22 s–1 mM–1) compared with fresh Gd(DTPA) (r1 = 4.13 s–1 mM–1 and r2 = 4.40 s–1 mM–1), which proved that electrospinning has not diminished the contrast properties of the complex. The new systems reported herein thus offer a new platform for delivering therapeutic and imaging agents simultaneously to the colon.
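The relaxivities r1 and r2 quoted above are by definition the slopes of the relaxation rate (1/T1 or 1/T2) plotted against Gd concentration. A minimal sketch of that fit, using hypothetical relaxometry readings rather than the measured data from this work:

```python
import numpy as np

# Hypothetical relaxometry data for Gd(DTPA) released from the fibers into tris buffer:
# Gd concentration (mM) and measured longitudinal relaxation times T1 (s).
conc_mM = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
t1_s = np.array([1.62, 0.93, 0.41, 0.21, 0.11])

# Relaxivity r1 is the slope of 1/T1 against [Gd]:  1/T1 = 1/T1_0 + r1 * [Gd].
rate = 1.0 / t1_s
r1, intercept = np.polyfit(conc_mM, rate, deg=1)
print(f"r1 = {r1:.2f} s^-1 mM^-1 (diamagnetic background 1/T1_0 = {intercept:.2f} s^-1)")
```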
Abstract:
This article argues that The Toughest Indian in the World (2000) by Native-American author Sherman Alexie combines elements of his tribal (oral) tradition with others coming from the Western (literary) short-story form. Like other Native writers — such as Momaday, Silko or Vizenor — Alexie is seen to bring into his short fiction characteristics of his people’s oral storytelling that make it much more dialogical and participatory. Among the author’s narrative techniques reminiscent of the oral tradition, aggregative repetitions of patterned thoughts and strategically-placed indeterminacies play a major role in encouraging his readers to engage in intellectual and emotional exchanges with the stories. Assisted by the ideas of theorists such as Ong (1988), Evers and Toelken (2001), and Teuton (2008), this article shows how Alexie’s short fiction is enriched and revitalized by the incorporation of oral elements. The essay also claims that new methods of analysis and assessment may be needed for this type of bicultural artistic form. Despite the differences between the two modes of communication, Alexie succeeds in blending features and techniques from both traditions, thus creating a new hybrid short-story form that suitably conveys the trying experiences faced by his characters.
Abstract:
There has been increasing interest in the development of new methods using Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question becomes how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can jointly consider neither multiple performance measures nor multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that account for multiple competing measures at the same time and compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of such models, as the number of studied cases is usually very small in such comparisons. Data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
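The Bayesian procedure mentioned above rests on a multinomial-Dirichlet conjugate model: each pairwise comparison on a data set yields one of a small set of outcomes (for example win, tie or loss on a given measure), the outcome counts are multinomial, and a Dirichlet prior gives a Dirichlet posterior over the outcome probabilities. A minimal two-algorithm, single-measure sketch of that idea with illustrative counts (the paper's actual tests jointly model multiple measures and competitors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Outcome counts over 30 data sets: [A wins, tie, B wins] on the measure of interest.
# These counts are illustrative, not results from the paper.
counts = np.array([18, 4, 8])
prior = np.array([1.0, 1.0, 1.0])            # uniform Dirichlet prior

# Conjugacy: the posterior over (p_win, p_tie, p_loss) is Dirichlet(counts + prior).
posterior_samples = rng.dirichlet(counts + prior, size=100_000)

# Posterior probability that algorithm A beats B more often than it loses.
p_a_better = np.mean(posterior_samples[:, 0] > posterior_samples[:, 2])
print(f"P(A more likely to win than to lose | data) = {p_a_better:.3f}")
```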
Abstract:
Data mining can be defined as the extraction of implicit, previously unknown, and potentially useful information from data. Numerous researchers have been developing security technology and exploring new methods to detect cyber-attacks with the DARPA 1998 dataset for intrusion detection and the modified versions of this dataset, KDDCup99 and NSL-KDD, but until now no one has examined the performance of the Top 10 data mining algorithms selected by experts in data mining. The classification learning algorithms compared in this thesis are C4.5, CART, k-NN and Naïve Bayes. The performance of these algorithms is compared in terms of accuracy, error rate and average cost on modified versions of the NSL-KDD train and test datasets, where the instances are classified into normal and four cyber-attack categories: DoS, Probing, R2L and U2R. Additionally, the most important features for detecting cyber-attacks in all categories and in each category are evaluated with Weka’s Attribute Evaluator and ranked according to Information Gain. The results show that the classification algorithm with the best performance on the dataset is the k-NN algorithm. The most important features for detecting cyber-attacks are basic features such as the number of seconds of a network connection, the protocol used for the connection, the network service used, the normal or error status of the connection and the number of data bytes sent. The most important features for detecting DoS, Probing and R2L attacks are basic features, and the least important are content features; for U2R attacks, in contrast, the content features are the most important for detecting attacks.
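The feature ranking and classifier evaluation described above can be reproduced in outline with scikit-learn: Information Gain corresponds to the mutual information between each feature and the class label, and k-NN is then scored on the held-out test set. In the sketch below the NSL-KDD arrays are replaced by random placeholder data of the same shape (41 features, five classes); loading and preprocessing the real dataset is left out:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for the preprocessed NSL-KDD arrays
# (X_* numeric features, y_* labels in {normal, DoS, Probing, R2L, U2R}).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 41))
y_train = rng.integers(0, 5, size=1000)
X_test = rng.normal(size=(200, 41))
y_test = rng.integers(0, 5, size=200)

# Rank features by information gain (mutual information with the class label).
info_gain = mutual_info_classif(X_train, y_train, random_state=0)
ranking = np.argsort(info_gain)[::-1]
print("Top 5 features by information gain:", ranking[:5])

# Evaluate k-NN, the best-performing classifier in the comparison above.
scaler = StandardScaler().fit(X_train)
knn = KNeighborsClassifier(n_neighbors=5).fit(scaler.transform(X_train), y_train)
accuracy = knn.score(scaler.transform(X_test), y_test)
print(f"k-NN accuracy: {accuracy:.3f}, error rate: {1 - accuracy:.3f}")
```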