941 results for test case optimization
Abstract:
Extra-ovarian primary peritoneal carcinoma (EOPPC) is a rare type of adenocarcinoma of the pelvic and abdominal peritoneum. The clinical presentation and the histological appearance of the neoplasia virtually overlap with those of ovarian carcinoma. We report the case of a 72-year-old patient who had undergone total hysterectomy with bilateral adnexectomy 20 years earlier, following a diagnosis of uterine leiomyomatosis. The patient came to our attention with recurring abdominal pain, constipation, weight loss, severe asthenia and fever. Her blood tests showed hypochromic microcytic anemia and a marked increase in CA 125 levels. Instrumental diagnostics with ultrasound (US) and CT scans indicated the presence of a single peritoneal mass (10-12 cm in diameter) close to the greater omentum. The patient was operated on through a midline abdominal incision and the mass was removed together with the greater omentum. No primary tumor was found elsewhere in the abdomen or pelvis. The operation lasted approximately 50 minutes. The post-operative course was uneventful and the patient was discharged four days later. Histological examination of the neoplasia, supported by immunohistochemical analysis, showed significant positivity for CA 125, vimentin and cytokeratin, the presence of psammoma bodies, and a cytoarchitectural pattern resembling that of serous ovarian carcinoma despite the absence of an ovarian primary, leading to a final diagnosis of EOPPC. The patient subsequently underwent six cycles of chemotherapy with paclitaxel (135 mg/m2/24 hr) in association with cisplatin (75 mg/m2). At the four-year follow-up, no sign of relapse was observed.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, and learning amounts to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the Roulette Wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'An Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003) 'A Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth Level Classifier System', Evolutionary Computation 2(1), pp. 1-18.
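To make the "learning as counting" idea above concrete, the following minimal sketch (not the authors' code) treats a schedule as a fixed-length string of rule indices and, under the simplifying assumption that construction steps are modelled independently rather than linked in a full Bayesian network, estimates a multinomial distribution per step by counting over promising rule strings and then samples new strings from it. The rule count, string length and seed population are hypothetical placeholders.

    import random
    from collections import Counter

    NUM_RULES = 4    # hypothetical number of construction rules
    STRING_LEN = 10  # hypothetical number of construction steps

    def estimate_probabilities(good_strings):
        # For every construction step, count how often each rule appears in the
        # promising strings and normalise the counts to a multinomial distribution.
        probs = []
        for pos in range(STRING_LEN):
            counts = Counter(s[pos] for s in good_strings)
            total = sum(counts.values())
            probs.append([counts.get(r, 0) / total for r in range(NUM_RULES)])
        return probs

    def sample_string(probs):
        # Build a new rule string by sampling each step from its distribution.
        return [random.choices(range(NUM_RULES), weights=p)[0] for p in probs]

    # Seed with an initial set of promising rule strings, then iterate: offspring
    # replace weaker strings by fitness selection and the model is re-estimated
    # until the stopping conditions are met.
    good = [[random.randrange(NUM_RULES) for _ in range(STRING_LEN)] for _ in range(20)]
    model = estimate_probabilities(good)
    offspring = [sample_string(model) for _ in range(20)]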
Abstract:
Dissertation submitted to the Universidade de Lisboa, Faculdade de Ciências, for the degree of Master in Applied Microbiology.
Abstract:
Introduction: Interest in food intolerances is growing, since in many cases no specific sensitizing agent can be identified. Objective: We investigated the serum levels of possible new haptens in 15 heavy meat consumers (for sports fitness) affected by various kinds of food intolerance who had been administered antibiotics for clinical problems at some point in their lives. Methods: Forty ml of blood was drawn from each patient and analyzed by an ELISA test in order to identify the possible presence of an undue contaminant with hapten properties. Results: Four out of fifteen subjects (26%) showed a serum oxytetracycline level > 6 ng/g (which is considered the safety limit), 10 of 15 (66%) a serum doxycycline level > 6 ng/g, and 3 out of 15 (20%) subjects had high serum levels of both molecules. Conclusions: Although a direct relationship between long-term storage of antibiotic residues in the body and chronic gut dysfunction and/or food allergy has not yet been demonstrated, blood traces of these compounds in a food-intolerant but otherwise healthy population might be considered a putative preliminary step in a sensitizing pathway. Our next goal is to gain deeper insight into the sensitizing trigger arising from chronic human antibiotic exposure via the zootechnical use of antibiotics in poultry feed.
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. For linear objectives we can encourage sparse scalings, and with barrier objective functions we force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to various frame classes to add more specificity to the results. Using frames generated from distributions allows the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an ONB, and for continuous symmetric distributions (Uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background into Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. We also discuss the differences with RPCA that make theoretical guarantees difficult.
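As a hedged numerical illustration of frame scalability (using the standard definition, not the dissertation's own code): a frame is scalable when some nonnegative weights turn its frame operator into the identity. The sketch below checks this for a hypothetical frame and candidate weights.

    import numpy as np

    def weighted_frame_operator(frame, weights):
        # S = sum_i w_i^2 f_i f_i^T, for frame vectors f_i stored as columns of `frame`.
        scaled = frame * weights  # scale each column f_i by w_i
        return scaled @ scaled.T

    def is_scaled_to_tight(frame, weights, tol=1e-8):
        # The weights scale the frame to a Parseval (tight) frame exactly when
        # the weighted frame operator equals the identity.
        d = frame.shape[0]
        return np.allclose(weighted_frame_operator(frame, weights), np.eye(d), atol=tol)

    # Hypothetical example: three equiangular unit vectors in R^2 (Mercedes-Benz
    # frame) are scalable with equal weights sqrt(2/3).
    mb = np.array([[0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2],
                   [1.0, -0.5, -0.5]])
    w = np.full(3, np.sqrt(2.0 / 3.0))
    print(is_scaled_to_tight(mb, w))  # expected: True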
Abstract:
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access latency overhead and potential bandwidth saturation of the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case-study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a Stop-The-World garbage collector when tracing connected objects in NUMA heaps. First, it identifies the rich locality that exists naturally in connected objects comprising a root object and its reachable set, termed 'rooted sub-graphs'. Second, this dissertation leverages the locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to contain a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads that run on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks from the established real-world DaCapo benchmark suite. In addition, the evaluation involves the widely used SPECjbb benchmark, a Neo4j graph database Java benchmark, and an artificial benchmark. The results of the NUMA-aware garbage collector on a multi-hop NUMA architecture show an average 15% performance improvement. Furthermore, this performance gain is shown to be the result of improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: the policy relies on outdated assumptions and generates a constant thread count. In fact, the Hotspot JVM still uses this policy in the production version. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring the optimal number of garbage collection threads yields better collection throughput than the default policy. Fifth, this dissertation designs and implements a runtime technique that uses heuristics from dynamic collection behavior to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average 21% improvement in garbage collection performance for the DaCapo benchmarks.
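A minimal sketch of the same-node stealing policy described above, assuming hypothetical worker, node and queue structures rather than anything from the Hotspot implementation:

    from collections import deque
    import random

    class GCWorker:
        # One GC thread: pinned to a NUMA node, with a local queue of rooted
        # sub-graphs (roots plus their reachable sets) left to trace.
        def __init__(self, worker_id, numa_node):
            self.id = worker_id
            self.node = numa_node
            self.queue = deque()

        def steal(self, workers):
            # Prefer victims on the same NUMA node so stolen work stays node-local;
            # fall back to remote nodes only when the local node has nothing to offer.
            same_node = [w for w in workers if w is not self and w.node == self.node and w.queue]
            any_node = [w for w in workers if w is not self and w.queue]
            for victims in (same_node, any_node):
                if victims:
                    return random.choice(victims).queue.pop()
            return None

    # Hypothetical setup: four workers on two NUMA nodes, with work only on node 0.
    workers = [GCWorker(i, i // 2) for i in range(4)]
    workers[0].queue.extend(["subgraph-A", "subgraph-B"])
    print(workers[1].steal(workers))  # same-node victim found (worker 0)
    print(workers[3].steal(workers))  # node 1 is empty, so it falls back to a remote steal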
Abstract:
Chromosome microarray analysis is a powerful diagnostic tool and is being used as a first-line approach to detect chromosome imbalances associated with intellectual disability, dysmorphic features and congenital abnormalities. This test enables the identification of new copy number variants (CNVs) and their association with new microdeletion/microduplication syndromes in patients previously without a diagnosis. We report the case of a 7-year-old female with moderate intellectual disability, severe speech delay and auto- and hetero-aggressive behavior, with a 45,XX,der(13;14)mat karyotype previously determined at a younger age. Affymetrix CytoScan 750K chromosome microarray analysis was performed, detecting a 1.77 Mb deletion at 3p26.3 encompassing 2 OMIM genes, CNTN6 and CNTN4. These genes play an important role in the formation, maintenance and plasticity of functional neuronal networks. Deletions or mutations in the CNTN4 gene have been implicated in intellectual disability and learning disabilities. Disruptions or deletions of the CNTN6 gene have been associated with developmental delay and other neurodevelopmental disorders. The haploinsufficiency of these genes has been suggested to contribute to the typical clinical features of 3p deletion syndrome. Nevertheless, inheritance from a healthy parent has been reported, suggesting incomplete penetrance and a variable phenotype for this CNV. We compare our patient with other similar reported cases, adding further value to the phenotype-genotype correlation of deletions in this region.
Abstract:
Fire is a frequent process in the landscapes of northern Portugal. Previous studies have shown that holm oak (Quercus rotundifolia) woodlands persist after fire and help to reduce its intensity and rate of spread. The main objectives of this study were to understand and model the effect of holm oak woodlands on fire behavior at the landscape level in the upper Sabor river basin, located in northeastern Portugal. The impact of holm oak woodlands on fire behavior was tested in terms of area and configuration according to scenarios simulating the possible distribution of these vegetation units in the landscape, considering holm oak cover percentages of 2.2% (Low), 18.1% (Moderate), 26.0% (High), and 39.8% (Rivers). These scenarios were mainly intended to test 1) the role of holm oak woodlands in fire behavior and 2) how the configuration of holm oak patches can help to reduce fireline intensity and burned area. Fire behavior was modeled with FlamMap, simulating fireline intensity and rate of spread based on fuel models associated with each land-use and land-cover class present in the study area, as well as on topographic factors (elevation, slope and aspect) and climatic factors (humidity and wind speed). Two fuel models were used for the holm oak cover (interior and edge areas), developed from field data collected in the region. The FRAGSTATS software was used to analyze the spatial patterns of the fireline intensity classes, using the metrics Class Area (CA), Number of Patches (NP) and Largest Patch Index (LPI). The results indicated that fireline intensity and rate of spread varied between scenarios and between fuel models for the holm oak woodland. Mean fireline intensity and mean rate of spread decreased as the percentage of holm oak woodland area in the landscape increased. The metrics CA, NP and LPI also varied between scenarios and fuel models, decreasing as the percentage of holm oak woodland area increased. This study allowed us to conclude that variations in the cover percentage and spatial configuration of holm oak woodlands influence fire behavior, reducing, on average, fireline intensity and rate of spread, and suggesting that holm oak woodlands can be used as a preventive silvicultural measure to reduce fire risk in this region.
Abstract:
The flow rates of the drying and nebulizing gas, the heat block and desolvation line temperatures, and the interface voltage are potential electrospray ionization parameters to tune, as they may enhance the sensitivity of the mass spectrometer. The conditions giving higher sensitivity for 13 pharmaceuticals were explored. First, a Plackett-Burman design was implemented to screen significant factors, and it was concluded that interface voltage and nebulizing gas flow were the only factors influencing the signal intensity for all pharmaceuticals. This fractional factorial design was projected onto a full 2^2 factorial design with center points. The lack-of-fit test proved to be significant. A central composite face-centered design was then conducted. Finally, a stepwise multiple linear regression and a subsequent optimization step were carried out. Two main drug clusters were found concerning the signal intensities of all runs of the augmented factorial design. p-Aminophenol, salicylic acid and nimesulide constitute one cluster, as they show much higher sensitivity than the remaining drugs. The other cluster is more homogeneous, with some sub-clusters comprising one pharmaceutical and its respective metabolite. It was observed that the instrumental signal increased when both significant factors increased, with the maximum signal occurring when both coded factors are set at level +1. It was also found that, for most of the pharmaceuticals, interface voltage influences the intensity of the instrument more than the nebulizing gas flow rate. The only exceptions are nimesulide, for which the relative importance of the factors is reversed, and salicylic acid, for which both factors influence the instrumental signal equally.
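For illustration only, the sketch below fits the kind of second-order model produced by a two-factor central composite face-centered design and locates the predicted optimum over the coded factor space (x1 standing for interface voltage and x2 for nebulizing gas flow); the responses are hypothetical placeholders, not the study's data.

    import numpy as np

    # Coded CCF design for 2 factors: factorial, face-centered axial and center points.
    X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-1, 0], [1, 0], [0, -1], [0, 1],
                  [0, 0], [0, 0], [0, 0]], dtype=float)
    y = np.array([10, 18, 14, 30, 12, 24, 13, 21, 17, 18, 16], dtype=float)  # hypothetical signal

    # Quadratic model matrix: 1, x1, x2, x1*x2, x1^2, x2^2 (fit by least squares).
    x1, x2 = X[:, 0], X[:, 1]
    M = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)

    # Evaluate the fitted surface over the coded square to locate the maximum signal.
    grid = np.linspace(-1, 1, 201)
    g1, g2 = np.meshgrid(grid, grid)
    pred = (coef[0] + coef[1] * g1 + coef[2] * g2 + coef[3] * g1 * g2
            + coef[4] * g1 ** 2 + coef[5] * g2 ** 2)
    i, j = np.unravel_index(np.argmax(pred), pred.shape)
    print(f"predicted optimum at x1={g1[i, j]:+.2f}, x2={g2[i, j]:+.2f}")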
Abstract:
Today, providing drinking water and process water is one of the major problems in most countries; surface water often needs to be treated to achieve the necessary quality, and technological and financial difficulties place great restrictions on operating the treatment units. Although water supply by simple and cheap systems has been one of the important objectives of scientific and research centers around the world, a great percentage of the population in developing countries, especially in rural areas, still does not have access to good-quality water. One of the large and available sources of acceptable water is sea water. There are two ways to treat sea water: evaporation and reverse osmosis (RO). Nowadays the RO system is widely used for desalination because of its low cost and ease of operation and maintenance. Sea water should be pretreated before the RO plant, because raw sea water contains constituents that can decrease the yield of the membranes in the RO system. The subject of this research may be useful in this respect, and we hope to achieve complete success in the design and construction of useful pretreatment systems for RO plants. One of the most important units in a sea water pretreatment plant is filtration; the conventional method is pressurized sand filters, and this research concerns a new filtration unit called the continuous backwash sand filter (CBWSF). The CBWSF designed and tested in this research may be used more economically and with less difficulty. It consists of two main parts: the shell body and a central part comprising an airlift pump, raw water feeding pipe, air supply hose, backwash chamber and sand washer, as well as inlet and outlet connections. The CBWSF is a continuously operating filter, i.e. the filter does not have to be taken out of operation for backwashing or cleaning. Inlet water is fed through the sand bed while the sand bed moves downwards. The water is filtered while the sand becomes dirty; simultaneously, the dirty sand is cleaned in the sand washer and the suspended solids are discharged in the backwash water. We analyze the behavior of the CBWSF in the pretreatment of sea water as a replacement for the pressurized sand filter. One important factor that is harmful to RO membranes is bio-fouling, which is assessed by the Silt Density Index (SDI). This research focused on decreasing SDI and turbidity (NTU). Based on this goal, a pretreatment prototype was designed and manufactured for testing, mainly using the design fundamentals of the CBWSF. The automatic backwash sand filter can be used in both small and large water supply schemes. In large water treatment plants, the filter units perform the filtration and backwash stages separately, while in small treatment plants the unit is usually compacted to achieve lower energy consumption. The analysis of the system showed that it may be used feasibly for water treatment, especially for small populations. Construction is rapid, simple and economical, and its performance is high because no moving mechanical parts are used, so it may be proposed as an effective method to improve water quality and consequently the level of hygiene in remote parts of the country.
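Since pretreatment performance is judged by the Silt Density Index, the following short sketch shows the standard SDI calculation from a timed filtration test; the sample times used are hypothetical.

    def silt_density_index(t_initial_s, t_final_s, elapsed_min=15):
        # Standard SDI: percentage plugging per minute over the test period, from
        # the times needed to filter a fixed volume (typically 500 mL through a
        # 0.45 um membrane at constant pressure) at the start and end of the test.
        plugging_ratio = 1.0 - (t_initial_s / t_final_s)
        return 100.0 * plugging_ratio / elapsed_min

    # Hypothetical example: 32 s initially, 58 s after 15 minutes.
    print(round(silt_density_index(32, 58, 15), 2))  # ~2.99; RO feeds commonly target SDI15 below 3-5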
Abstract:
There is scientific evidence demonstrating the benefits of mushroom ingestion due to their richness in bioactive compounds such as mycosterols, in particular ergosterol [1]. Agaricus bisporus L. is the most consumed mushroom worldwide, presenting 90% of ergosterol in its sterol fraction [2]. Thus, it is an interesting matrix from which to obtain ergosterol, a molecule with high commercial value. According to the literature, the ergosterol concentration can vary between 3 and 9 mg per g of dried mushroom. Nowadays, traditional methods such as maceration and Soxhlet extraction are being replaced by emerging methodologies such as ultrasound-assisted (UAE) and microwave-assisted extraction (MAE) in order to decrease the amount of solvent used and the extraction time while increasing the extraction yield [2]. In the present work, A. bisporus was extracted varying several parameters relevant to UAE and MAE. UAE: solvent type (hexane and ethanol), ultrasound amplitude (50-100%) and sonication time (5-15 min); MAE: solvent fixed as ethanol, time (0-20 min), temperature (60-210 °C) and solid-liquid ratio (1-20 g/L). Moreover, in order to decrease the process complexity, the pertinence of applying a saponification step was evaluated. Response surface methodology was applied to generate mathematical models that allow maximizing and optimizing the response variables that influence the extraction of ergosterol. Concerning UAE, ethanol proved to be the best solvent to achieve higher levels of ergosterol (671.5 ± 0.5 mg/100 g dw, at 75% amplitude for 15 min), since hexane was only able to extract 152.2 ± 0.2 mg/100 g dw under the same conditions. Nevertheless, the hexane extract showed higher purity (11%) when compared with the ethanol counterpart (4%). Furthermore, in the case of the ethanolic extract, the saponification step increased its purity to 21%, while for the hexane extract the purity was similar; in fact, hexane presents higher selectivity for lipophilic compounds compared with ethanol. Regarding the MAE technique, the results showed that the optimal conditions (19 ± 3 min, 133 ± 12 °C and 1.6 ± 0.5 g/L) allowed higher ergosterol extraction levels (556 ± 26 mg/100 g dw). The values obtained with MAE are close to the ones obtained with conventional Soxhlet extraction (676 ± 3 mg/100 g dw) and UAE. Overall, UAE and MAE proved to be efficient technologies to maximize ergosterol extraction yields.
Abstract:
Plants frequently suffer contamination by toxigenic fungi, and their mycotoxins can be produced throughout the growth, harvest, drying and storage periods. The objective of this work was to validate a fast and highly sensitive method for the detection of toxins in medicinal and aromatic plants, optimizing the joint co-extraction of aflatoxins (AF: AFB1, AFB2, AFG1 and AFG2) and ochratoxin A (OTA), using Aloysia citrodora P. (lemon verbena) as a case study. For optimization purposes, samples were spiked (n=3) with standard solutions of a mix of the four AFs and OTA at 10 ng/g for AFB1, AFG1 and OTA, and at 6 ng/g for AFB2 and AFG2. Several extraction procedures were tested: i) ultrasound-assisted extraction in sodium chloride and methanol/water (80:20, v/v) [(OTA+AFs)1]; ii) maceration in methanol/1% NaHCO3 (70:30, v/v) [(OTA+AFs)2]; iii) maceration in methanol/1% NaHCO3 (70:30, v/v) (OTA1); and iv) maceration in sodium chloride and methanol/water (80:20, v/v) (AF1). AF and OTA were purified using the mycotoxin-specific immunoaffinity columns AflaTest WB and OchraTest WB (VICAM), respectively. Separation was performed with a Merck Chromolith Performance C18 column (100 x 4.6 mm) by reverse-phase HPLC coupled to a fluorescence detector (FLD) and a photochemical derivatization system (for AF). The recoveries obtained from the spiked samples showed that the single-extraction methods (OTA1 and AF1) performed better than the co-extraction methods. For in-house validation of the selected methods OTA1 and AF1, recovery and precision were determined (n=6). The recovery of OTA for method OTA1 was 81%, and the intermediate precision (RSDint) was 1.1%. The recoveries of AFB1, AFB2, AFG1 and AFG2 ranged from 64% to 110% for method AF1, with RSDint lower than 5%. Methods OTA1 and AF1 showed precision and recoveries within the legislated values and were found to be suitable for the extraction of OTA and AF from the matrix under study.
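As a hedged illustration of the validation figures reported above, the sketch below computes recovery and relative standard deviation from replicate spiked-sample measurements; the replicate concentrations shown are hypothetical.

    import statistics

    def recovery_percent(measured, spiked):
        # Mean measured concentration as a percentage of the spiked level.
        return 100.0 * statistics.mean(measured) / spiked

    def rsd_percent(measured):
        # Relative standard deviation (precision) of the replicate measurements.
        return 100.0 * statistics.stdev(measured) / statistics.mean(measured)

    ota_replicates = [8.2, 8.0, 8.1, 8.2, 8.1, 8.0]  # ng/g, hypothetical, spiked at 10 ng/g
    print(f"recovery = {recovery_percent(ota_replicates, 10.0):.0f}%")
    print(f"RSD      = {rsd_percent(ota_replicates):.1f}%")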
Abstract:
Miles and Snow's configurational theory has received a great deal of attention from many investigators. Framing the Miles and Snow typology within the organizational configuration concept, the main purpose of this paper is to make an empirical evaluation of what configurational theories postulate: higher organizational performance is associated with resemblance to one of the defined ideal types. However, as it is often assumed that an organization can increase performance by selecting the hybrid type best adjusted to its own exogenous environment, the relation between organizational effectiveness and the alignment of the hybrid configuration with the respective specific environment type was also analyzed. The assumption of equifinality was also considered, because configurational theory assumes that all the ideal types can potentially achieve the same performance level. A multiple regression model was estimated to test whether the misfit relative to the ideal and hybrid types has a significant impact on organizational effectiveness. Analysis of variance and the Kruskal-Wallis test were used to verify the equality of performance between the different organization types. In short, the empirical results obtained confirm what is postulated by the theory.
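A brief sketch (with hypothetical data, not the paper's) of the two analyses described: regressing effectiveness on misfit from the ideal type, and testing the equality of performance across strategic types with the Kruskal-Wallis test.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical sample: misfit distances from the ideal type and an effectiveness measure.
    misfit = rng.uniform(0, 1, 60)
    effectiveness = 5.0 - 2.0 * misfit + rng.normal(0, 0.5, 60)
    slope, intercept, r, p, se = stats.linregress(misfit, effectiveness)
    print(f"misfit coefficient = {slope:.2f} (p = {p:.3f})")

    # Hypothetical performance scores for three strategic types.
    defender, prospector, analyzer = np.split(effectiveness, 3)
    h, p_kw = stats.kruskal(defender, prospector, analyzer)
    print(f"Kruskal-Wallis H = {h:.2f} (p = {p_kw:.3f})  # large p is consistent with equifinality")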