878 results for onion routing
Abstract:
We studied the effect of silicon (Si) on Casparian band (CB) development, the chemical composition of the exodermal CB, and Si deposition across the root in the Si accumulators rice and maize and the Si non-accumulator onion. Plants were cultivated in nutrient solution with and without Si supply. CB development was determined in stained root cross-sections. The outer part of the roots containing the exodermis was isolated after enzymatic treatment. The exodermal suberin was transesterified with MeOH/BF3, and the chemical composition was measured using gas chromatography-mass spectrometry (GC-MS) and gas chromatography with flame ionization detection (GC-FID). Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) was used to determine the Si deposition across root cross-sections. Si promoted CB formation in the roots of both the Si-accumulator and the Si non-accumulator species. Exodermal suberin was decreased in rice and maize owing to decreased amounts of the aromatic suberin fractions. Si did not affect the concentration of lignin and lignin-like polymers in the outer part of rice, maize and onion roots. The highest Si depositions were found in the tissues containing the CB. These data, together with the literature, were used to suggest a mechanism by which Si promotes CB development by forming complexes with phenols.
Abstract:
Endophytic fungi, which live within host plant tissues without causing any visible symptoms of infection, are important mutualists that mediate plant-herbivore interactions. Thrips tabaci (Lindeman) is one of the key pests of onion, Allium cepa L., an economically important agricultural crop cultivated worldwide. However, information on endophyte colonization of onions, and on the impacts of endophytes on the biology of thrips feeding on them, is lacking. We tested the colonization of onion plants by selected fungal endophyte isolates using two inoculation methods. The effects of the inoculated endophytes on T. tabaci infesting onion were also examined. The seven fungal endophytes used in our study were able to colonize onion plants through either the seed or the seedling inoculation method. Seed inoculation resulted in a 1.47-times-higher mean percentage post-inoculation recovery across all the endophytes tested compared with seedling inoculation. Fewer thrips were observed on plants inoculated with the Clonostachys rosea ICIPE 707, Trichoderma asperellum M2RT4, Trichoderma atroviride ICIPE 710, Trichoderma harzianum 709, Hypocrea lixii F3ST1 and Fusarium sp. ICIPE 712 isolates than on those inoculated with Fusarium sp. ICIPE 717 and the control treatments. Onion plants colonized by C. rosea ICIPE 707, T. asperellum M2RT4, T. atroviride ICIPE 710 and H. lixii F3ST1 had significantly fewer feeding punctures than those in the other treatments. Among the isolates tested, the lowest numbers of eggs were laid by T. tabaci on plants inoculated with H. lixii F3ST1 and C. rosea ICIPE 707. These results extend the knowledge of the colonization of onions by fungal endophytes and their effects on Thrips tabaci.
Abstract:
The high cost of municipal waste collection and the need to meet targets set out in legal instruments are two motivations that lead to the need to optimise the waste collection service. Waste collection optimisation is a highly complex problem involving the analysis of transport networks. This work proposes optimisation solutions for the collection of mixed municipal waste, based on a case study: route RSU I 06 in the municipality of Aveiro. For this purpose, a geographic representation and analysis application was used: the ArcGIS software module ArcMap and its Network Analyst extension, developed to compute optimised routes between points of interest. The work carried out with Network Analyst includes the presentation of two of its features (Route and Vehicle Routing Problem). Compared with the current collection route, and based on the tests performed, it was possible to conclude that this application yields optimised collection routes that are shorter or of lower duration. At the management level, it further allowed the conclusion that, with the current container capacity, it would be feasible to halve the collection frequency from six times per week, dividing the collection area into two areas according to the needs of each location and thereby reducing the collection effort even more. Applying Network Analyst to the case study showed that it is software of great interest for the management of municipal waste collection, despite some application restrictions, and that the quality and effectiveness of the optimisation procedure depend on the quality of the input data, in particular the geographic description available for the streets, and, to a large extent, also on the management model considered.
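As a rough illustration of the kind of computation a route solver performs on a street network, here is a minimal sketch in Python using networkx, with a hypothetical toy street graph; it is not the ArcGIS Network Analyst API:

import networkx as nx

streets = nx.Graph()
streets.add_weighted_edges_from([
    ("depot", "A", 4), ("A", "B", 2), ("B", "C", 3),
    ("A", "C", 6), ("C", "depot", 5),
])

def greedy_route(graph, depot, stops):
    # Precompute shortest-path distances between all nodes (Dijkstra).
    dist = dict(nx.all_pairs_dijkstra_path_length(graph, weight="weight"))
    route, current, remaining = [depot], depot, set(stops)
    total = 0.0
    while remaining:
        nearest = min(remaining, key=lambda s: dist[current][s])  # nearest unvisited stop
        total += dist[current][nearest]
        route.append(nearest)
        remaining.remove(nearest)
        current = nearest
    total += dist[current][depot]  # return leg to the depot
    return route + [depot], total

print(greedy_route(streets, "depot", ["A", "B", "C"]))

A production solver would add capacities, time windows and turn restrictions; this nearest-neighbour ordering only conveys the basic idea of routing over weighted street segments.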
Abstract:
The wide adoption of the Internet Protocol (IP) as the de facto protocol for most communication networks has created a need for IP-capable data link layer solutions for Machine-to-Machine (M2M) and Internet of Things (IoT) networks. However, the wireless networks used for M2M and IoT applications usually lack the resources commonly associated with modern wireless communication networks. The existing IP-capable data link layer solutions for wireless IoT networks provide the necessary overhead-minimising and frame-optimising features, but are often built to be compatible only with IPv6 and specific radio platforms. The objective of this thesis is to design an IPv4-compatible data link layer for Netcontrol Oy's narrowband half-duplex packet data radio system. Based on extensive literature research, system modelling and solution concept testing, this thesis proposes the use of the tunslip protocol as the basis for the system's data link layer protocol development. In addition to the functionality of tunslip, this thesis discusses the additional network, routing, compression, security and collision avoidance changes required of the radio platform for it to be IP compatible while still maintaining its point-to-multipoint and multi-hop network characteristics. The data link layer design consists of the radio application, a dynamic Maximum Transmission Unit (MTU) optimisation daemon and the tunslip interface. The proposed design uses tunslip to create an IP-capable data link protocol interface. The radio application receives data from tunslip, compresses the packets and uses the IP addressing information for radio network addressing and routing before forwarding the message to the radio network. The dynamic MTU size optimisation daemon controls the maximum MTU size of the tunslip interface according to a link quality assessment calculated from radio network diagnostic data received from the radio application. To determine the usability of tunslip as the basis for the data link layer protocol, the tunslip interface was tested with both IEEE 802.15.4 radios and packet data radios. The test cases measure the radio network's usability for User Datagram Protocol (UDP) based applications without applying any header or content compression. The test results for the packet data radios show that the typical success rate for packet reception over a single-hop link is above 99%, with a round-trip delay of 0.315 s for 63 B packets.
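For context, tunslip tunnels IP packets over a serial line using SLIP framing (RFC 1055). A minimal sketch of that framing follows, as a generic illustration rather than Netcontrol's radio implementation:

# SLIP special bytes per RFC 1055.
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet: bytes) -> bytes:
    out = bytearray([END])  # leading END flushes any line noise
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])   # escape END inside payload
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])   # escape ESC inside payload
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_decode(frame: bytes) -> bytes:
    out, esc = bytearray(), False
    for b in frame:
        if esc:
            out.append(END if b == ESC_END else ESC)
            esc = False
        elif b == ESC:
            esc = True
        elif b != END:                     # END bytes delimit, carry no data
            out.append(b)
    return bytes(out)

pkt = bytes([0x45, 0x00, 0xC0, 0xDB])      # toy IPv4 header fragment
assert slip_decode(slip_encode(pkt)) == pkt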
The berth allocation problem: a study of the simulated annealing and genetic algorithm heuristics
Abstract:
This work presents a case study of the Simulated Annealing and Genetic Algorithm heuristics for a highly relevant problem found in port systems, the Berth Allocation Problem. This problem deals with the scheduling and allocation of ships to berthing areas along a quay. The model used in this research is presented by Mauri (2008) [28], which treats the problem as a Vehicle Routing Problem with Multiple Depots and no Time Windows. An appropriate simulation testing environment was developed, in which the analysis scenario was built from real situations found in the ship scheduling of a container terminal. The computational tests performed show the performance of the heuristics with respect to the objective function and computational time, in order to assess which technique yields better results.
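A minimal sketch of the simulated annealing scheme studied here, with a toy objective and neighbourhood standing in for Mauri's (2008) berth allocation formulation:

import math, random

def simulated_annealing(initial, cost, neighbour,
                        t0=100.0, cooling=0.95, iters=2000):
    current, best = initial, initial
    t = t0
    for _ in range(iters):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Accept improvements always; accept worsenings with probability e^(-delta/t).
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
        t *= cooling  # geometric cooling schedule
    return best

# Toy instance: order four ships to minimise a position-weighted handling cost.
handling = [5, 3, 8, 2]
cost = lambda order: sum((i + 1) * handling[s] for i, s in enumerate(order))

def neighbour(order):
    a, b = random.sample(range(len(order)), 2)  # swap two ships
    nxt = list(order)
    nxt[a], nxt[b] = nxt[b], nxt[a]
    return nxt

print(simulated_annealing(list(range(4)), cost, neighbour))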
Abstract:
This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k)-approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink.
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
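For reference, the two notions used informally above can be stated in symbols (notation ours): an algorithm A is an alpha-approximation for a minimisation problem if, for every instance I,

\[
  \mathrm{cost}\bigl(A(I)\bigr) \;\le\; \alpha \cdot \mathrm{OPT}(I),
\]

and the density of a subgraph H with edge costs c is

\[
  \rho(H) \;=\; \frac{\sum_{e \in E(H)} c(e)}{|V(H)|},
\]

the quantity the pruning process keeps comparable between the input graph and the extracted 2-vertex-connected subgraph.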
Abstract:
Reconstructing Northern Hemisphere ice-sheet oscillations and meltwater routing to the ocean is important for better understanding the mechanisms behind abrupt climate changes. To date, research efforts have mainly focused on the North American (Laurentide) ice sheet (LIS), leaving the potential role of the European Ice Sheet (EIS), and of the Scandinavian Ice Sheet (SIS) in particular, largely unexplored. Using neodymium isotopes in detrital sediments deposited off the Channel River, we provide a continuous and well-dated record of the evolution of the southern EIS margin through the end of the last glacial period and during the deglaciation. Our results reveal that the evolution of the EIS margins was accompanied by substantial ice recession (especially of the SIS) and the simultaneous release of meltwater to the North Atlantic. These events occurred both during the advance of the EIS to its LGM position (i.e., during Heinrich Stadials –HS– 3 and 2; ∼31–29 ka and ∼26–23 ka, respectively) and during the deglaciation (i.e., at ∼22 ka, ∼20–19 ka and from 18.2 ± 0.2 to 16.7 ± 0.2 ka, corresponding to the first part of HS1). The deglaciation was discontinuous in character, and similar in timing to that of the southern LIS margin, with moderate ice-sheet retreat (from 22.5 ± 0.2 ka in the Baltic lowlands) as soon as northern summer insolation increased (from ∼23 ka) and an acceleration of the margin retreat thereafter (from ∼20 ka). Importantly, our results show that EIS retreat events and the release of meltwater to the North Atlantic during the deglaciation coincide with AMOC destabilisation and interhemispheric climate changes. They thus suggest that the EIS, together with the LIS, could have played a critical role in the climatic reorganisation that accompanied the last deglaciation. Finally, our data suggest that meltwater discharges to the North Atlantic produced by large-scale recession of the continental parts of Northern Hemisphere ice sheets during HS could have been a possible source of the oceanic perturbations (i.e., AMOC shutdown) responsible for the marine-based ice-stream purge cycle, the so-called Heinrich Events (HEs), that punctuate the last glacial period.
Abstract:
Real-time mechanisms and techniques are used when a system, whether embedded or large-scale, must possess certain characteristics that ensure its quality of service. Real-Time Systems are thus defined as systems subject to strict temporal constraints, which need to exhibit high levels of reliability in order to guarantee, in every instance, the timely operation of the system. Due to the growing complexity of embedded systems, distributed architectures are frequently employed, where each module is typically responsible for a single function. In these cases a communication medium between the modules is required, so that they can communicate with one another and fulfil the desired functionality. Owing to its high capacity and low cost, Ethernet technology has been the subject of study aimed at turning it into a communication medium with the quality of service characteristic of real-time systems. In response to this need, the HaRTES Switch was developed at the University of Aveiro; it can manage its resources dynamically so as to provide real-time guarantees to the network in which it is deployed. However, for a network architecture to be able to provide quality-of-service guarantees to its nodes, it requires flow specification, correct traffic forwarding, resource reservation, admission control and packet scheduling. Unfortunately, although the HaRTES Switch has all these characteristics, it does not support standard protocols. This document presents the work carried out to integrate the SRP protocol into the HaRTES Switch.
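As a toy illustration of the admission-control idea mentioned above (hypothetical stream names and values, not the HaRTES or SRP logic itself), a new stream is admitted only if the link's total reserved bandwidth stays within capacity:

def admit(streams, new_stream, capacity_bps):
    # Sum the bandwidth already reserved and test the new request against capacity.
    used = sum(s["bps"] for s in streams)
    return used + new_stream["bps"] <= capacity_bps

streams = [{"id": "cam1", "bps": 25_000_000}]
print(admit(streams, {"id": "lidar", "bps": 60_000_000}, 100_000_000))  # True: fits
print(admit(streams, {"id": "bulk", "bps": 90_000_000}, 100_000_000))   # False: rejected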
Abstract:
The U.S. railroad companies spend billions of dollars every year on railroad track maintenance in order to ensure the safety and operational efficiency of their railroad networks. Besides maintenance costs, other costs such as train accident costs, train and shipment delay costs and rolling stock maintenance costs are also closely related to track maintenance activities. Optimizing the track maintenance process on extensive railroad networks is a very complex problem with major cost implications. Currently, the decision-making process for track maintenance planning is largely manual and relies primarily on the knowledge and judgment of experts. There is considerable potential to improve the process by using operations research techniques to develop solutions to the optimization problems in track maintenance. In this dissertation study, we propose a range of mathematical models and solution algorithms for three network-level track maintenance scheduling problems: the track inspection scheduling problem (TISP), the production team scheduling problem (PTSP) and the job-to-project clustering problem (JTPCP). TISP involves a set of inspection teams which travel over the railroad network to identify track defects. It is a large-scale routing and scheduling problem in which thousands of tasks are to be scheduled subject to many difficult side constraints, such as periodicity constraints and discrete working time constraints. A vehicle routing problem formulation was proposed for TISP, and a customized heuristic algorithm was developed to solve the model. The algorithm iteratively applies a constructive heuristic and a local search algorithm in an incremental scheduling horizon framework. The proposed model and algorithm have been adopted by a Class I railroad in its decision-making process. Real-world case studies show that the proposed approach outperforms the manual approach in short-term scheduling and can be used to conduct long-term what-if analyses to yield managerial insights. PTSP schedules capital track maintenance projects, which are the largest track maintenance activities and account for the majority of railroad capital spending. A time-space network model was proposed to formulate PTSP. More than ten types of side constraints were considered in the model, including very complex constraints such as mutual exclusion constraints and consecution constraints. A multiple neighborhood search algorithm, including a decomposition and restriction search and a block-interchange search, was developed to solve the model. Various performance enhancement techniques, such as data reduction, an augmented cost function and subproblem prioritization, were developed to improve the algorithm. The proposed approach has been adopted by a Class I railroad for two years. Our numerical results show that the model solutions are able to satisfy all hard constraints and most soft constraints. Compared with the existing manual procedure, the proposed approach is able to bring significant cost savings and operational efficiency improvements. JTPCP is an intermediate problem between TISP and PTSP. It focuses on clustering thousands of capital track maintenance jobs (based on the defects identified in track inspection) into projects so that the projects can be scheduled in PTSP. A vehicle routing problem based model and a multiple-step heuristic algorithm were developed to solve this problem. Various side constraints, such as mutual exclusion constraints and rounding constraints, were considered.
The proposed approach has been applied in practice and has shown good performance in both solution quality and efficiency.
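A skeletal sketch of the incremental scheduling horizon framework described for TISP, with hypothetical placeholder functions standing in for the constructive heuristic and the local search:

def schedule_incrementally(tasks, horizon_step, total_horizon,
                           construct, local_search):
    # Repeatedly extend the horizon, build a schedule for the newly exposed
    # tasks, then improve the whole partial schedule with local search.
    schedule, start = [], 0
    while start < total_horizon:
        window_end = start + horizon_step
        due = [t for t in tasks if start <= t["due"] < window_end]
        schedule = local_search(construct(schedule, due))
        start = window_end
    return schedule

# Toy usage with trivial placeholders.
tasks = [{"id": i, "due": d} for i, d in enumerate([3, 7, 12, 18])]
construct = lambda sched, due: sched + sorted(due, key=lambda t: t["due"])
no_op_search = lambda sched: sched  # a real local search would reorder tasks
print(schedule_incrementally(tasks, 10, 20, construct, no_op_search))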
Abstract:
Many existing encrypted Internet protocols leak information through packet sizes and timing. Though seemingly innocuous, prior work has shown that such leakage can be used to recover part or all of the plaintext being encrypted. The prevalence of encrypted protocols as the underpinning of such critical services as e-commerce, remote login, and anonymity networks and the increasing feasibility of attacks on these services represent a considerable risk to communications security. Existing mechanisms for preventing traffic analysis focus on re-routing and padding. These prevention techniques have considerable resource and overhead requirements. Furthermore, padding is easily detectable and, in some cases, can introduce its own vulnerabilities. To address these shortcomings, we propose embedding real traffic in synthetically generated encrypted cover traffic. Novel to our approach is our use of realistic network protocol behavior models to generate cover traffic. The observable traffic we generate also has the benefit of being indistinguishable from other real encrypted traffic, further thwarting an adversary's ability to target attacks. In this dissertation, we introduce the design of a proxy system called TrafficMimic that implements realistic cover traffic tunneling and can be used alone or integrated with the Tor anonymity system. We describe the cover traffic generation process, including the subtleties of implementing a secure traffic generator. We show that TrafficMimic cover traffic can fool a complex protocol classification attack with 91% of the accuracy of real traffic. TrafficMimic cover traffic is also not detected by a binary classification attack specifically designed to detect TrafficMimic. We evaluate the performance of tunneling with independent cover traffic models and find that they are comparable to, and in some cases more efficient than, generic constant-rate defenses. We then use simulation and analytic modeling to understand the performance of cover traffic tunneling more deeply. We find that we can take measurements from real or simulated traffic with no tunneling and use them to estimate parameters for an accurate analytic model of the performance impact of cover traffic tunneling. Once validated, we use this model to better understand how delay, bandwidth, tunnel slowdown, and stability affect cover traffic tunneling. Finally, we take the insights from our simulation study and develop several biasing techniques that we can use to match the cover traffic to the real traffic while simultaneously bounding external information leakage. We study these bias methods using simulation and evaluate their security using a Bayesian inference attack. We find that we can safely improve performance with biasing while preventing both traffic analysis and defense detection attacks. We then apply these biasing methods to the real TrafficMimic implementation and evaluate it on the Internet. We find that biasing can provide 3-5x improvement in bandwidth for bulk transfers and 2.5-9.5x speedup for Web browsing over tunneling without biasing.
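A conceptual sketch of cover-traffic tunneling (an illustration of the general idea, not the TrafficMimic implementation): packet sizes and inter-arrival gaps are drawn from a traffic model, and real payload bytes are embedded into the scheduled cover packets, with padding when no real data is waiting:

import random
from collections import deque

def tunnel(real_data, model_sizes, model_gaps, n_packets=20):
    # Split the real byte stream into chunks queued for embedding.
    queue = deque(real_data[i:i + 64] for i in range(0, len(real_data), 64))
    sent = []
    for _ in range(n_packets):
        size = random.choice(model_sizes)  # cover packet size from the model
        gap = random.choice(model_gaps)    # inter-arrival gap from the model
        payload = queue.popleft() if queue else b""
        if len(payload) > size:            # carry any overflow forward
            queue.appendleft(payload[size:])
            payload = payload[:size]
        sent.append((gap, payload.ljust(size, b"\x00")))  # pad to cover size
    return sent

pkts = tunnel(b"hello onion routing " * 16,
              model_sizes=[120, 400, 1460], model_gaps=[0.01, 0.05, 0.2])
print(len(pkts), [len(p) for _, p in pkts[:5]])

The key property conveyed here is that the transmitted sizes and timing follow the cover model regardless of the real payload; a deployable system would additionally encrypt the frames and sample from learned protocol behavior models rather than fixed lists.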
Abstract:
Understanding the natural evolution of a river-delta-sea system is important to develop a strong scientific basis for efficient integrated management plans. The distribution of sediment fluxes is linked with the natural connection between sediment source areas situated in uplifting mountain chains and deposition in plains, deltas and, ultimately, in the capturing oceans and seas. The Danube River-western Black Sea system is one of the most active European systems in terms of sediment redistribution and poses significant societal challenges. We aim to derive the tectonic and sedimentological background of human-induced changes in this system and discuss their interplay. This is obtained by analysing the tectonic and associated vertical movements, the evolution of relevant basins and the key events affecting sediment routing and deposition. The analysis of the main source and sink areas focuses in particular on the Miocene evolution of the Carpatho-Balkanides, the Dinarides and their sedimentary basins, including the western Black Sea. The vertical movements of mountain chains created the main moments of basin connectivity observed in the Danube system. Their timing and effects are observed in sediments deposited in the vicinity of gateways, such as the transition between the Pannonian/Transylvanian and Dacian basins and between the Dacian Basin and the western Black Sea. The results demonstrate the importance of understanding threshold conditions driving rapid basin connectivity changes superposed over the longer time scale of tectonically induced vertical movements associated with background erosion and sedimentation. The spatial and temporal scales of such processes are contrastingly different and challenging. The long-term patterns interact with recent or anthropogenically induced modifications of the natural system and may result in rapid changes at threshold conditions that can be quantified and predicted. Understanding them is critical because of their frequent occurrence during orogenic evolution, as commonly observed in the Mediterranean area and discussed elsewhere.
Abstract:
Chloroplast protease AtDeg2 (an ATP-independent serine endopeptidase) is synthesized in the cytosol as a precursor, which is imported into the chloroplast stroma and deprived of its transit peptide. The mature protein then undergoes routing to its functional location at the stromal side of the thylakoid membrane. In its linear structure, the AtDeg2 molecule contains the protease domain with the catalytic triad (HDS) and two PDZ domains (PDZ1 and PDZ2). In vivo, AtDeg2 most probably exists as a supposedly inactive hexamer, which may change its oligomeric state to form an active 12-mer or 24-mer. AtDeg2 has recently been demonstrated to exhibit a dual protease/chaperone function. This review focuses on the current knowledge regarding AtDeg2 structure and functional significance.
Abstract:
In this research work, a new routing protocol for Opportunistic Networks is presented. The proposed protocol is called PSONET (PSO for Opportunistic Networks), since it uses a hybrid system built around a Particle Swarm Optimization (PSO) algorithm. The main motivation for using PSO is to take advantage of its individual-based search and its learning adaptation. PSONET uses the Particle Swarm Optimization technique to drive network traffic through a good subset of message forwarders. PSONET analyzes network communication conditions, detecting whether each node has sparse or dense connections, and thus makes better message routing decisions. The PSONET protocol is compared with the Epidemic and PROPHET protocols in three different mobility scenarios: an activity-based mobility model, which simulates the everyday life of people in their work, leisure and rest activities; a community-based mobility model, which simulates groups of people in their communities who occasionally contact other people, who may or may not be part of their community, to exchange information; and a random mobility pattern, which simulates a scenario divided into communities where people choose a destination at random and, based on the restriction map, move to this destination using the shortest path. The simulation results, obtained with the ONE simulator, show that in the community-based and random mobility scenarios the PSONET protocol achieves a higher message delivery rate and lower message replication than the Epidemic and PROPHET protocols.
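A minimal particle swarm optimization loop, the search technique PSONET builds on; here it minimises a toy two-dimensional function rather than making forwarding decisions:

import random

def pso(f, dim=2, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Random initial positions and zero velocities.
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]            # personal best per particle
    gbest = min(pbest, key=f)              # global best over the swarm
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest

sphere = lambda x: sum(v * v for v in x)
print(pso(sphere))  # typically converges towards the origin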
Abstract:
World rice production reaches 80 million tonnes per year; considering that the husks represent 20% of this value, about 1,162,000 tonnes of this residue are generated annually. A few years ago this material was simply discarded into the environment; today, environmental protection laws have made rice husk residue a matter of concern (FOLETO, 2005). According to Mayer (2006), the husk takes approximately 5 years to decompose and releases a high volume of methane, one of the gases responsible for the greenhouse effect. In order to protect the integrity of the environment, alternatives are being sought to reduce the environmental impacts of disposal and to recover the investments made in grain cultivation. Because of its high silicon content, rice husk is a raw material of great interest for application in several fields (the electronics and ceramics industries and agriculture); it can also be used as an energy source and applied as an adsorbent in chemical analyses (FOLETO, 2005; ROSA, 2009). The objective of this work was to standardise a method for the determination of aflatoxins B1, B2, G1 and G2 and ochratoxin A in onion, employing the extraction technique Matrix Solid-Phase Dispersion (MSPD) with rice husk as the adsorbent, so as to enable the determination of the contaminants by high-performance thin-layer chromatography (HPTLC) and/or high-performance liquid chromatography with fluorescence detection (HPLC-FD). Onion (Allium cepa L.) was chosen as the matrix because of its economic importance: it is the third most economically important vegetable in Brazil, after tomato and potato. The country is among the ten largest producers in the world, with a production of 1,548,146 tonnes in the 2010 harvest. Among the states that stand out for their production are Santa Catarina, São Paulo and Rio Grande do Sul. However, onion, like other foods, is susceptible to fungal contamination, and if toxigenic species are present in the microbiota, mycotoxin production may occur. The method was validated by evaluating the analytical curve, linearity, limits of detection and quantification, precision (repeatability and intermediate precision) and accuracy (recovery) for each type of chromatographic determination. For HPTLC, the limits of detection ranged from 0.33 to 5 µg kg-1 and the limits of quantification from 1 to 15 µg kg-1. For the HPLC-FD method, the limits of detection ranged from 0.003 to 0.26 µg kg-1 and the limits of quantification from 0.03 to 2.6 µg kg-1. Recoveries ranged from 76 to 95% for the HPTLC method and from 72 to 88% for HPLC-FD. The developed method was applied to verify the occurrence of mycotoxins in 14 onion samples. Aflatoxin contamination was found in 43% of the samples analysed. The highest level found was 90 µg kg-1 of aflatoxin B2 in a sample of creole onion with a black-spot defect.
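For reference, one convention commonly used to derive such limits from the calibration curve (e.g., the ICH approach; the abstract does not state which convention was applied here) expresses them through the standard deviation of the response, σ, and the slope of the analytical curve, S:

\[
  \mathrm{LOD} = \frac{3.3\,\sigma}{S},
  \qquad
  \mathrm{LOQ} = \frac{10\,\sigma}{S}.
\]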