922 results for convergence of numerical methods
Abstract:
Milk and egg matrices were assayed for aflatoxin M1 (AFM1) and B1 (AFB1), respectively, by AOAC official and modified methods, with detection and quantification by thin-layer chromatography (TLC) and high-performance thin-layer chromatography (HPTLC). The modified methods, Blanc followed by Romer, proved the most appropriate for AFM1 analysis in milk. Both methods reduced emulsion formation, produced cleaner extracts without streaking spots, and improved precision and accuracy, especially when quantification was performed by HPTLC. The use of a ternary mixture in the Blanc method was advantageous because the solvent could extract AFM1 directly in the first (extraction) stage, leaving other compounds in the binary-mixture layer, avoiding emulsion formation and thus reducing toxin loss. The relative standard deviation (RSD%) values were low, 16 and 7% when TLC and HPTLC were used, with mean recoveries of 94 and 97%, respectively. As far as the egg matrix and final extract are concerned, both methods evaluated for AFB1 need further study. Although that matrix leads to emulsions with consequent loss of toxin, the modified Romer method produced a reasonably clean extract (mean recoveries of 92 and 96% for TLC and HPTLC, respectively). Most of the methods studied did not perform as expected, mainly because of the matrices' high content of triglycerides (rich in saturated fatty acids), cholesterol, carotene and proteins. Although most current methodology for AFM1 is based on HPLC, TLC determination (modified Blanc and Romer methods) of AFM1 and AFB1 is particularly recommended for analysts inexperienced in food and feed mycotoxin analysis, and especially for those who cannot afford sophisticated (HPLC, HPTLC) instrumentation.
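As a minimal illustration of the figures of merit quoted above, the sketch below computes mean recovery and RSD% from replicate determinations of a spiked sample; the spike level and replicate values are invented placeholders, not data from the study.

```python
# Hypothetical illustration: recovery (%) and relative standard deviation (RSD%)
# for replicate spiked-sample determinations. All numbers below are made up.
import statistics

def recovery_and_rsd(measured, spiked_level):
    """Return (mean recovery %, RSD %) for a list of replicate measurements."""
    recoveries = [100.0 * m / spiked_level for m in measured]
    mean_rec = statistics.mean(recoveries)
    rsd = 100.0 * statistics.stdev(recoveries) / mean_rec
    return mean_rec, rsd

# Example with invented replicate AFM1 results (µg/L) for a 0.50 µg/L spike
mean_rec, rsd = recovery_and_rsd([0.47, 0.49, 0.46, 0.50, 0.48], 0.50)
print(f"mean recovery = {mean_rec:.1f}%, RSD = {rsd:.1f}%")
```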
Abstract:
Phenomena in the cyber domain, especially threats to security and privacy, have become an increasingly heated topic addressed by different writers and scholars at an increasing pace, both nationally and internationally. However, little public research has been done on the subject of cyber intelligence. The main research question of the thesis was: to what extent is the applicability of cyber intelligence acquisition methods circumstantial? The study was conducted in a sequential manner, starting with defining the concept of intelligence in the cyber domain and identifying its key attributes, followed by identifying the range of intelligence methods in the cyber domain, the criteria influencing their applicability, and the types of operatives utilizing cyber intelligence. The methods and criteria were refined into a hierarchical model. The existing conceptions of cyber intelligence were mapped through an extensive literature study of a wide variety of sources. The established understanding was further developed through 15 semi-structured interviews with experts of different backgrounds, whose wide range of points of view substantially broadened the perspective on the subject. Four of the interviewed experts participated in a relatively extensive survey based on the constructed hierarchical model of cyber intelligence, which was formulated into an AHP hierarchy and executed in the Expert Choice Comparion online application. It was concluded that intelligence in the cyber domain is an endorsing, cross-cutting intelligence discipline that adds value to all aspects of conventional intelligence, that it bears a substantial number of characteristic traits, both advantageous and disadvantageous, and that the applicability of cyber intelligence methods is partly circumstantially limited.
Abstract:
Most soybean pathogens are seed transmitted, with particular emphasis on the fungus Sclerotinia sclerotiorum, which has shown worrying levels of field incidence in some soybean cropping areas in several Brazilian states. The objective of this study was to verify the efficiency of different methods for detecting S. sclerotiorum on soybean seeds artificially infected in the laboratory and on seeds from field production areas with a history of disease incidence. Seed samples of seven different cultivars collected from naturally infested fields, and one seed sample artificially inoculated in the laboratory, were used. The following detection methods recommended in the literature were compared: the Blotter test at 7 ºC, 14 ºC, and 21 ºC; Rolled Paper; and Neon-S. The results demonstrated that these methods showed no repeatability and had low sensitivity for detecting the pathogen in seeds from areas with disease incidence. They were effective, however, for its detection on artificially inoculated seeds. In the Blotter test at 7 ºC, there was a lower incidence of other fungi considered undesirable during seed analysis.
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of the gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods' applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, an almighty lie detection method exists that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case and that popular approaches are often oversimplified. The main research question of this study was: what is the applicability of veracity assessment methods that are reliable and based on scientific proof, in terms of the following criteria?
o Accuracy, i.e. the probability of detecting deception successfully
o Ease of use, i.e. how easily the method can be applied correctly
o Time required to apply the method reliably
o No need for special equipment
o Unobtrusiveness of the method
In order to answer the main research question, the following supporting research questions were answered first: what kinds of interviewing and interrogation techniques exist and how could they be used in the intelligence interview context; what kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof; and what kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A Multi-Criteria Analysis utilizing the Analytic Hierarchy Process (AHP) was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to gain first-hand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply, and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria-Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that the most applicable methods are not entirely trouble-free either. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired.
Since most current lie detection studies are built around a scenario in which roughly half of the assessed people are totally truthful and the other half are liars presenting a well-prepared cover story, it is proposed that future studies test lie detection and veracity assessment methods against partially truthful human sources. This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the modern ones that are still under development.
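The AHP comparison described in the abstract above was run in Expert Choice Comparion. As a hedged, generic illustration of the underlying technique (not the thesis's actual judgments), the sketch below derives priority weights for the five assessment criteria from an assumed pairwise-comparison matrix via the principal-eigenvector method and checks its consistency ratio.

```python
# Minimal AHP sketch: priority weights from a pairwise-comparison matrix.
# The comparison values below are illustrative assumptions only.
import numpy as np

criteria = ["Accuracy", "Ease of Use", "Time Required",
            "No Special Equipment", "Unobtrusiveness"]

# A[i, j] = how much more important criterion i is than criterion j (Saaty's 1-9 scale)
A = np.array([
    [1,   3,   5,   7,   5],
    [1/3, 1,   3,   5,   3],
    [1/5, 1/3, 1,   3,   1],
    [1/7, 1/5, 1/3, 1,   1/3],
    [1/5, 1/3, 1,   3,   1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # normalized priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)        # consistency index
cr = ci / 1.12                              # Saaty's random index for n = 5 is ~1.12

for name, weight in zip(criteria, w):
    print(f"{name:22s} {weight:.3f}")
print(f"consistency ratio = {cr:.3f} (acceptable when < 0.10)")
```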
Abstract:
There are a considerable number of programs and agencies that count on the existence of a unique relationship between nature and human development. In addition, there are significant bodies of literature dedicated to understanding developmentally focused nature-based experiences. This research project was designed to further the understanding of this phenomenon. Consequently, the purpose of this research endeavour was to discover the essence of the intersection of personal transformation and nature-based leisure, culminating in a rich and detailed account of this otherwise tacit phenomenon. As such, this research built on the assumption of this beneficial intersection of nature and personal transformation and contributes to the understanding of how this context supports or generates self-actualization and positive development. Heuristic methods were employed because heuristics is concerned with the quality and essence of an experience, not causal relationships (Moustakas, 1990). Heuristic inquiry begins with the primary researcher and her personal experience and knowledge of the phenomenon. This study also involved four other co-researchers who had also experienced this phenomenon intensely. Co-researchers were found through purposeful and snowball sampling. Rich narrative descriptions of their experiences were gathered through in-depth, semi-structured interviews, and artifact elicitation was employed as a means to get at the co-researchers' tacit knowledge. Each co-researcher was interviewed twice (the first interview focused on personal transformation, the second on nature) for approximately four and a half hours in total. Transcripts were read repeatedly to discern patterns that emerged from the study of the narratives and were coded accordingly. Individual narratives were consolidated to create a composite narrative of the experience. Finally, a creative synthesis was developed to represent the essence of this tacit experience. In conclusion, the essence of the intersection of nature-based leisure and personal transformation was found to lie in the convergence of the lived experience of authenticity. The physical environment of nature was perceived and experienced as a space and context of authenticity, leisure experiences were experienced as an engagement of authenticity, and individuals themselves encountered a true or authentic self that emanated from within. The implications of these findings are many, offering suggestions and considerations ranging from reconsidered approaches to environmental education to support for self-directed human development.
Abstract:
A general derivation of the anharmonic coefficients for a periodic lattice, invoking the special case of the central force interaction, is presented. All of the contributions to the mean square displacement (MSD) to order λ⁴ in perturbation theory are enumerated. A direct correspondence is found between the high-temperature-limit MSD and the high-temperature-limit free energy contributions up to and including O(λ⁴). This correspondence follows from the detailed derivation of some of the contributions to the MSD. Numerical results are obtained for all the MSD contributions to O(λ⁴) using the Lennard-Jones potential, for the lattice constants and temperatures for which the Monte Carlo results were calculated by Heiser, Shukla and Cowley. The Peierls approximation is also employed in order to simplify the numerical evaluation of the MSD contributions. The numerical results indicate convergence of the perturbation expansion up to 75% of the melting temperature of the solid (T_M) for the exact calculation; however, better agreement with the Monte Carlo results is not obtained when the total of all λ⁴ contributions is added to the λ² perturbation theory results. Using the Peierls approximation the expansion converges up to 45% of T_M. The MSD contributions arising in the Green's function method of Shukla and Hubschle are derived and enumerated up to and including O(λ⁸). The total MSD from these selected contributions is in excellent agreement with their results at all temperatures. Theoretical values of the recoilless fraction for krypton are calculated from the MSD contributions for both the Lennard-Jones and Aziz potentials. The agreement with experimental values is quite good.
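For readers unfamiliar with the basic ingredients of such lattice-dynamics calculations, the following sketch evaluates the Lennard-Jones pair potential and its first two radial derivatives, which enter the harmonic and anharmonic force constants; it is an illustrative assumption, not the thesis's code.

```python
# Lennard-Jones potential and its radial derivatives in reduced units (eps = sigma = 1).
def lj(r, epsilon=1.0, sigma=1.0):
    """phi(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    x = sigma / r
    return 4.0 * epsilon * (x**12 - x**6)

def lj_d1(r, epsilon=1.0, sigma=1.0):
    """First derivative phi'(r)."""
    return 4.0 * epsilon * (-12.0 * sigma**12 / r**13 + 6.0 * sigma**6 / r**7)

def lj_d2(r, epsilon=1.0, sigma=1.0):
    """Second derivative phi''(r), proportional to the harmonic force constant."""
    return 4.0 * epsilon * (156.0 * sigma**12 / r**14 - 42.0 * sigma**6 / r**8)

r0 = 2.0 ** (1.0 / 6.0)      # potential minimum, where phi'(r0) = 0 and phi(r0) = -eps
print(lj(r0), lj_d1(r0), lj_d2(r0))
```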
Abstract:
The following thesis provides an empirical case study in which a group of 6 first-generation female Afghan Canadian youth is studied to determine their identity negotiation and development processes in everyday experiences. This process is investigated across the different contexts of home, school, and the community. In terms of schooling experiences, 2 participants each are selected to represent public, Islamic, and Catholic schools in Southern Ontario. The study employs feminist research methods and is analyzed through a convergence of critical race theory (critical race feminism), youth development theory, and feminist theory. Participant experiences reveal issues of racism, discrimination, and bias within the public and Catholic schooling systems. Within these contexts, participants suppress their identities or are exposed to negative experiences based on their ethnic or religious identification. Students in Islamic schools experience support for a more positive ethnic and religious identity. Home and community provide nurturing contexts in which participants are able to reaffirm and develop a positive overall identity.
Abstract:
Presently, conditions ensuring the validity of bootstrap methods for the sample mean of (possibly heterogeneous) near epoch dependent (NED) functions of mixing processes are unknown. Here we establish the validity of the bootstrap in this context, extending the applicability of bootstrap methods to a class of processes broadly relevant for applications in economics and finance. Our results apply to two block bootstrap methods: the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992), and the stationary bootstrap of Politis and Romano (1994). In particular, the consistency of the bootstrap variance estimator for the sample mean is shown to be robust against heteroskedasticity and dependence of unknown form. The first-order asymptotic validity of the bootstrap approximation to the actual distribution of the sample mean is also established in this heterogeneous NED context.
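A minimal sketch of the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992), applied to the sample mean of a dependent, heteroskedastic series, is given below; the block length and the simulated series are illustrative assumptions, and the code makes no claim about the paper's asymptotic conditions.

```python
# Moving blocks bootstrap for the sample mean of a dependent series.
import numpy as np

rng = np.random.default_rng(0)

def moving_blocks_bootstrap_mean(x, block_len, n_boot=2000, rng=rng):
    """Bootstrap replicates of the sample mean using overlapping blocks."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts_max = n - block_len + 1                  # admissible block start positions
    reps = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, starts_max, size=n_blocks)
        resampled = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        reps[b] = resampled.mean()
    return reps

# Example: an AR(1)-type dependent series with heteroskedastic innovations
e = rng.standard_normal(500) * np.linspace(0.5, 1.5, 500)
x = np.empty(500)
x[0] = e[0]
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + e[t]

reps = moving_blocks_bootstrap_mean(x, block_len=20)
print("bootstrap variance of the sample mean:", reps.var(ddof=1))
```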
Abstract:
Call centers are key elements of almost any large organization. The workforce management problem has received a great deal of attention in the literature. A typical formulation is based on performance measures over an infinite horizon, and the agent staffing problem is usually solved by combining optimization and simulation methods. In this thesis, we consider an agent staffing problem for call centers subject to chance constraints. We introduce a formulation that requires the quality-of-service (QoS) constraints to be satisfied with high probability, and define a sample average approximation of this problem in a multi-skill setting. We establish the convergence of the solution of the approximate problem to that of the original problem as the sample size grows. For the particular case in which all agents have all skills (a single agent group), we design three simulation-based optimization methods for the sample average problem. Given an initial staffing level, we increase the number of agents for the periods in which the constraints are violated, and decrease the number of agents for the periods in which the constraints remain satisfied after the reduction. Numerical experiments are carried out on several low-occupancy call center models, for which the algorithms yield good solutions, i.e. most of the chance constraints are satisfied and the staff cannot be reduced in any given period without introducing constraint violations. One advantage of these algorithms, compared to other methods, is their ease of implementation.
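The local adjustment idea described above (add agents in periods whose chance constraint is violated, then remove agents wherever the constraints remain satisfied) can be sketched as follows. This is an assumption-laden outline, not the thesis's algorithms; simulate_qos_probability is a hypothetical placeholder for a call-center simulator that returns, per period, the estimated probability of meeting the QoS target.

```python
# Hedged sketch of a two-phase staffing adjustment heuristic around a simulator.
from typing import Callable, List

def adjust_staffing(staff: List[int],
                    simulate_qos_probability: Callable[[List[int]], List[float]],
                    target_prob: float = 0.95,
                    max_iter: int = 100) -> List[int]:
    staff = list(staff)
    # Phase 1: add agents to periods whose chance constraint is violated.
    for _ in range(max_iter):
        probs = simulate_qos_probability(staff)
        violated = [p < target_prob for p in probs]
        if not any(violated):
            break
        for t, v in enumerate(violated):
            if v:
                staff[t] += 1
    # Phase 2: remove agents wherever all constraints stay satisfied afterwards.
    for t in range(len(staff)):
        while staff[t] > 0:
            trial = list(staff)
            trial[t] -= 1
            if all(p >= target_prob for p in simulate_qos_probability(trial)):
                staff = trial
            else:
                break
    return staff
```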
Abstract:
There are many ways to generate geometrical models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using the information directly available in the data, such as gray levels, we build a metric to drive a mesh adaptation process. The metric is used to specify the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The quality of the resulting model has been evaluated qualitatively and quantitatively by comparing it with an analytical solution and with a segmentation made by an expert. Results show that, based on the accuracy of the volume reconstruction for a given mesh size, our method gives meshes as good as or better than a similar isotropic method in 90% of the cases. Moreover, a comparison of the Hausdorff distances between the adapted meshes of both methods and ground-truth volumes shows that our method decreases reconstruction errors faster.
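One common way to drive anisotropic adaptation from gray levels, in the spirit of (but not necessarily identical to) the paper's metric, is to build the metric tensor from the Hessian of the intensity field with bounded eigenvalues; the 2D sketch below illustrates this under those assumptions.

```python
# Gray-level-driven anisotropic metric sketch: one tensor per pixel, built from
# the intensity Hessian, with eigenvalues clipped to enforce min/max element sizes.
import numpy as np

def intensity_metric(gray: np.ndarray, h_min: float = 1.0, h_max: float = 20.0,
                     scale: float = 1.0) -> np.ndarray:
    """Return an (nx, ny, 2, 2) field of symmetric positive-definite metric tensors."""
    gx, gy = np.gradient(gray.astype(float))
    gxx, gxy = np.gradient(gx)
    _,   gyy = np.gradient(gy)
    metric = np.empty(gray.shape + (2, 2))
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            H = np.array([[gxx[i, j], gxy[i, j]], [gxy[i, j], gyy[i, j]]])
            vals, vecs = np.linalg.eigh(H)
            # Small elements (large metric eigenvalues) where the curvature is large.
            lam = np.clip(scale * np.abs(vals), 1.0 / h_max**2, 1.0 / h_min**2)
            metric[i, j] = vecs @ np.diag(lam) @ vecs.T
    return metric

img = np.zeros((64, 64)); img[20:44, 20:44] = 255.0   # toy image with a sharp square
print(intensity_metric(img).shape)
```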
Abstract:
This thesis deals with the study of light beam propagation through different nonlinear media. Analytical and numerical methods are used to show the formation of solitons in these media. Basic experiments have also been performed to show the formation of a self-written waveguide in a photopolymer. The variational method is used for the analytical analysis throughout the thesis. A numerical method based on finite-difference forms of the original partial differential equation is used for the numerical analysis. In Chapter 2, we study two kinds of solitons, the (2 + 1)D spatial solitons and the (3 + 1)D spatio-temporal solitons, in a cubic-quintic medium in the presence of multiphoton ionization. In Chapter 3, we study the evolution of a light beam through a different kind of nonlinear medium, the photorefractive polymer. We study modulational instability and beam propagation through a photorefractive polymer in the presence of absorption losses. One-dimensional beam propagation through the nonlinear medium is studied using variational and numerical methods. Stable soliton propagation is observed both analytically and numerically. Chapter 4 deals with the study of modulational instability in a photorefractive crystal in the presence of wave mixing effects. Modulational instability in a photorefractive medium is studied in the presence of two-wave mixing. We then propose and derive a model for forward four-wave mixing in the photorefractive medium and investigate the modulational instability induced by four-wave mixing effects. The instability gain is obtained using standard linear stability analysis. Chapter 5 deals with the study of self-written waveguides. Besides the usual analytical analysis, basic experiments were done showing the formation of a self-written waveguide in a photopolymer system. The formation of a directional coupler in a photopolymer system is studied theoretically in Chapter 6. We propose and study, using the variational approximation as well as numerical simulation, the evolution of a probe beam through a directional coupler formed in a photopolymer system.
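As a simple numerical illustration of beam propagation in a Kerr-type nonlinear medium, the sketch below propagates a fundamental soliton of the (1+1)D nonlinear Schrödinger equation with a standard split-step Fourier scheme; this scheme is chosen for brevity and is not the thesis's finite-difference implementation.

```python
# Split-step Fourier propagation of i u_z + (1/2) u_xx + |u|^2 u = 0.
import numpy as np

nx, nz = 512, 400
x = np.linspace(-20, 20, nx, endpoint=False)
dz = 0.01
k = 2 * np.pi * np.fft.fftfreq(nx, d=x[1] - x[0])

u = 1.0 / np.cosh(x)                          # fundamental soliton as the input beam
half_linear = np.exp(-0.5j * k**2 * dz / 2)   # half-step of diffraction in Fourier space

for _ in range(nz):
    u = np.fft.ifft(half_linear * np.fft.fft(u))
    u *= np.exp(1j * np.abs(u)**2 * dz)       # full nonlinear (Kerr) step
    u = np.fft.ifft(half_linear * np.fft.fft(u))

print("peak amplitude after propagation:", np.max(np.abs(u)))   # stays ~1 for a soliton
```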
Abstract:
This thesis focuses on the preparation and characterization of a few selected representatives of the ferrite family in the nanoregime, namely manganese zinc ferrite and cobalt ferrite, prepared by coprecipitation and sol-gel combustion techniques, respectively. The thesis not only stresses the preparation techniques and optimization of the reaction conditions, but also emphasizes the investigation of various properties, namely structural, magnetic and electrical. Passivated nickel nanocomposites are synthesized using polystyrene beads and adopting a novel route of ion-exchange reduction. The structural and magnetic properties of these magnetic nanocomposites are correlated. The magnetocaloric effect (MCE) exhibited by these materials is also investigated with a view to finding out the potential of these materials as magnetic refrigerants. Numerical methods are employed to evaluate the entropy change for selected samples.
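A standard numerical route to the entropy change mentioned above is to evaluate ΔS_M(T) from isothermal magnetization data via the Maxwell relation ΔS_M = ∫ (∂M/∂T)_H dH; the sketch below shows this with invented M(H, T) placeholders and is not the thesis's calculation.

```python
# Magnetic entropy change from M(H, T) curves via the Maxwell relation.
import numpy as np

T = np.linspace(250, 350, 21)            # temperatures (K)
H = np.linspace(0, 5, 51)                # applied field (T)
# Toy magnetization surface with a broad transition near 300 K (rows: T, columns: H)
M = 50.0 * np.tanh(H[None, :] + 0.2) * (1 - np.tanh((T[:, None] - 300) / 20)) / 2

dM_dT = np.gradient(M, T, axis=0)        # (dM/dT) at constant H by finite differences
dH = H[1] - H[0]
delta_S = (dM_dT[:, :-1] + dM_dT[:, 1:]).sum(axis=1) * dH / 2   # trapezoidal rule over H

i = np.argmax(np.abs(delta_S))
print("peak |dS_M| =", abs(delta_S[i]), "at T =", T[i], "K")
```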
Abstract:
The SST–convection relation over the tropical ocean and its impact on the South Asian monsoon forms the first part of this thesis. Understanding the complicated relation between SST and convection is important for better prediction of the variability of the Indian monsoon on subseasonal, seasonal, interannual, and longer time scales. Improved global data sets from satellite scatterometer observations of SST and precipitation, together with refined reanalyses of global wind fields, have made it possible to conduct a comprehensive study of the SST–convection relation. The interaction of the monsoon and the Indian Ocean is discussed. A coupled feedback process between SST and the active-break cycle of the Asian summer monsoon is a central theme of the thesis. The relation between SST and convection is very important in the field of numerical modeling of tropical rainfall. It is well known that models generally do very well in simulating rainfall in areas of tropical convergence zones but are unable to produce satisfactory simulations in the monsoon areas. Thus, in this study we critically examine the different mechanisms of generation of deep convection over these two distinct regions. The study reported in Chapter 3 shows that the SST–convection relation over the warm pool regions of the Indian and west Pacific oceans (monsoon areas) is such that convection increases with SST in the range 26–29 °C, while for SST higher than 29–30 °C convection decreases with increasing SST (the so-called Waliser type). It is found that convection is induced in areas with SST gradients in the warm pool regions of the Indian and west Pacific oceans. Once deep convection is initiated south of the warmest region of the warm pool, the deep tropospheric heating by the latent heat released in the convective clouds produces strong low-level wind fields (the low-level jet, LLJ) on the equatorward side of the warm pool, and both the convection and the wind are found to grow through a positive feedback process. Thus SST, through its gradient, acts only as an initiator of convection. The central region of the warm pool has very small SST gradients, and large values of convection are associated with the cyclonic vorticity of the LLJ in the atmospheric boundary layer. The conditionally unstable atmosphere in the tropics is favorable for the production of deep convective clouds.
Abstract:
There are basically two methods for the prediction of shallow water waves, viz. the graphical method and the numerical method. Numerical methods are now widely used because they are fast and accurate and are especially useful when prediction over a large spatial frame is required. Practically little has been done on the development of numerical models for the prediction of the height and spectral transformation of waves as applicable to our coasts. Synchronized deep and shallow water wave measurements, which are essential for the study of wave transformation, are very much lacking for our coasts. Under these circumstances, a comprehensive study of wave transformation in the shallow waters of our coast was felt to be very important and is undertaken in the present investigation.
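A typical first step in such numerical wave transformation schemes is solving the linear dispersion relation for the local wavenumber and computing the shoaling coefficient; the sketch below shows this under linear wave theory and is an illustrative assumption rather than the thesis's model.

```python
# Linear dispersion relation omega^2 = g*k*tanh(k*d) solved by Newton iteration,
# followed by the shoaling coefficient Ks = sqrt(Cg_deep / Cg(d)).
import math

G = 9.81

def wavenumber(T, d, tol=1e-10):
    """Wavenumber k (rad/m) for wave period T (s) in water depth d (m)."""
    omega = 2 * math.pi / T
    k = omega**2 / G                      # deep-water first guess
    for _ in range(100):
        f = G * k * math.tanh(k * d) - omega**2
        df = G * (math.tanh(k * d) + k * d / math.cosh(k * d)**2)
        k_new = k - f / df
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

def group_velocity(T, d):
    k = wavenumber(T, d)
    c = (2 * math.pi / T) / k
    n = 0.5 * (1 + 2 * k * d / math.sinh(2 * k * d))
    return n * c

def shoaling_coefficient(T, d):
    cg_deep = G * T / (4 * math.pi)       # deep-water group velocity
    return math.sqrt(cg_deep / group_velocity(T, d))

print(shoaling_coefficient(T=8.0, d=5.0))   # e.g. an 8 s wave shoaling into 5 m depth
```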
Abstract:
DNA sequence representation methods are used to denote a gene structure effectively and help in the similarity/dissimilarity analysis of coding sequences. Many different kinds of representations have been proposed in the literature. They can be broadly classified into numerical, graphical, geometrical and hybrid representation methods. DNA structure and function analysis is made easier with graphical and geometrical representation methods, since they give a visual representation of the DNA structure. In numerical methods, numerical values are assigned to a sequence and digital signal processing techniques are used to analyze it. Hybrid approaches have also been reported in the literature for analyzing DNA sequences. This paper reviews the latest developments in DNA sequence representation methods. We also present a taxonomy of the various methods, and a comparison of these methods is made wherever possible.
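As an illustration of the numerical-plus-DSP route mentioned above, the sketch below maps a sequence to the Voss binary-indicator representation and computes the combined Fourier power spectrum, whose period-3 peak is commonly used to flag coding regions; the example sequence is arbitrary, not data from any cited study.

```python
# Voss binary-indicator representation of a DNA sequence and its combined power spectrum.
import numpy as np

def voss_indicators(seq: str) -> np.ndarray:
    """Return a 4 x N binary matrix with one indicator row per nucleotide (A, C, G, T)."""
    seq = seq.upper()
    return np.array([[1 if s == b else 0 for s in seq] for b in "ACGT"], dtype=float)

def combined_power_spectrum(seq: str) -> np.ndarray:
    """Sum of the squared DFT magnitudes of the four indicator sequences."""
    u = voss_indicators(seq)
    spectra = np.abs(np.fft.fft(u, axis=1))**2
    return spectra.sum(axis=0)

seq = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"
p = combined_power_spectrum(seq)
n = len(seq)
print("relative strength at period 3:", p[round(n / 3)] / p[1:n // 2].mean())
```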