902 results for Permutation Ordered Binary Number System


Relevance:

100.00%

Publisher:

Abstract:

The role of the binary nucleation of sulfuric acid in aerosol formation, and its implications for global warming, is one of the fundamental unsettled questions in atmospheric chemistry. We have investigated the thermodynamics of sulfuric acid hydration using ab initio quantum mechanical methods. For H2SO4(H2O)n with n = 1–6, we used a scheme combining molecular dynamics configurational sampling with high-level ab initio calculations to locate the global and many low-lying local minima for each cluster size. For each isomer, we extrapolated the Møller–Plesset perturbation theory (MP2) energies to their complete basis set (CBS) limit and added finite-temperature corrections within the rigid-rotor-harmonic-oscillator (RRHO) model using scaled harmonic vibrational frequencies. We found that ion-pair (HSO4–·H3O+)(H2O)n−1 clusters are competitive with the neutral (H2SO4)(H2O)n clusters for n ≥ 3 and are more stable than the neutral clusters for n ≥ 4, depending on the temperature. The Boltzmann-averaged Gibbs free energies for the formation of H2SO4(H2O)n clusters are favorable in colder regions of the troposphere (T = 216.65–273.15 K) for n = 1–6, but the formation of clusters with n ≥ 5 is not favorable at higher temperatures (T > 273.15 K). Our results suggest that the critical cluster of a binary H2SO4–H2O system must contain more than one H2SO4 molecule, and they are in concert with recent findings (1) that the role of binary nucleation is small at ambient conditions but significant in colder regions of the troposphere. Overall, the results support the idea that binary nucleation of sulfuric acid and water cannot account for nucleation of sulfuric acid in the lower troposphere.
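The Boltzmann averaging over isomers described above can be sketched in a few lines. This is a minimal illustration; the relative free-energy values below are invented placeholders, not numbers from the study:

```python
import math

# Gas constant in kcal/(mol K), matching free energies given in kcal/mol.
R = 1.987204e-3

def boltzmann_average(delta_g, temperature):
    """Population-weighted average Gibbs free energy of competing isomers."""
    weights = [math.exp(-g / (R * temperature)) for g in delta_g]
    z = sum(weights)  # partition-function-like normalization
    return sum(w * g for w, g in zip(weights, delta_g)) / z

# Illustrative isomer free energies relative to the lowest-lying minimum.
isomers = [0.0, 0.4, 1.1]
print(boltzmann_average(isomers, 216.65))  # colder troposphere
print(boltzmann_average(isomers, 298.15))  # ambient temperature
```

At lower temperature the lowest-lying isomer dominates the weights, so the average sits closer to the global minimum, which is why the averaged formation free energies are temperature dependent.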

Relevance:

100.00%

Publisher:

Abstract:

The project presented here concerns the technologies used for object detection and recognition, especially of leaves and chromosomes. The document has the typical parts of a scientific paper: an abstract, an introduction, sections on the area of investigation, future work, conclusions, and the references used in its elaboration. The abstract states what the paper covers, namely the technologies employed in pattern detection and recognition for leaves and chromosomes, and the existing work on cataloguing these objects. The introduction explains the meanings of detection and recognition. This is necessary because many papers confuse the two terms, especially those dealing with chromosomes. Detecting an object means gathering the parts of the image that are useful and eliminating the useless parts; in short, detection amounts to finding the object's borders. Recognition, by contrast, is the process by which the computer or machine decides what kind of object it is handling. Afterwards we survey the most widely used technologies for object detection in general. There are two main groups in this category: those based on image derivatives and those based on ASIFT points. The methods based on image derivatives have in common that the image is processed by convolving it with a previously defined kernel. This is done to detect borders in the image, which are changes in pixel intensity. Within these technologies there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity because they use only the first derivative, and Laplacian-based methods, which search for zero crossings because they use the second derivative.
The choice between them depends on the level of detail wanted in the final result: gradient-based methods involve fewer operations, so they consume less time and fewer resources, but the quality is worse; Laplacian-based methods need more time and resources because they require more operations, but they give a much better-quality result. After explaining the derivative-based methods, we review the different algorithms available for both groups. The other large group of technologies for object recognition is based on ASIFT points, which use six image parameters and compare two images with respect to those parameters. The disadvantage of these methods, for our future purposes, is that they are only valid for a single specific object: if we want to recognize two different leaves, even of the same species, this method cannot recognize both. It is still important to mention this family of technologies, since we are discussing recognition methods in general. At the end of the chapter there is a comparison of the pros and cons of all the technologies discussed, first separately and then all together, with our purposes in mind. The next chapter, on recognition techniques, is not very extensive because, although there are general steps for object recognition, each object to be recognized is different and requires its own method, so no single general method can be specified there. We then move on to leaf-detection techniques on computers, using the derivative-based technique explained above. The next step is to turn the leaf into a set of parameters; depending on the document consulted, there are more or fewer parameters.
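The gradient-versus-Laplacian contrast above can be illustrated with a minimal sketch: both families slide a small kernel over the image. This is a toy pure-Python pass over a hypothetical step-edge image, not code from the project; the kernels are the standard Sobel-x and 4-neighbour Laplacian:

```python
# Derivative-based edge detection with the two kernel families the text
# contrasts. The sliding-kernel pass below is cross-correlation, which is
# what image libraries commonly call "convolution".
SOBEL_X   = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # first derivative (gradient)
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]     # second derivative

def apply_kernel(image, kernel):
    """Slide a kernel over a 2D image (valid mode, no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(kernel[j][i] * image[y + j][x + i]
                 for j in range(kh) for i in range(kw))
             for x in range(len(image[0]) - kw + 1)]
            for y in range(len(image) - kh + 1)]

# A hypothetical grayscale image with a vertical step edge (0 -> 10).
step = [[0, 0, 10, 10]] * 4
print(apply_kernel(step, SOBEL_X))    # strong response near the edge
print(apply_kernel(step, LAPLACIAN))  # sign change across the edge
```

The gradient kernel yields an extremum where the intensity changes fastest, while the Laplacian output changes sign across the edge, which is why Laplacian-based methods search for zero crossings.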
Some papers recommend dividing the leaf into 3 main features (shape, dent and vein) and deriving up to 16 secondary features from them by mathematical operations. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter) and extracts 12 secondary features from those. This second alternative is the most widely used, so it is the one taken as reference. Moving on to leaf recognition, we rely on a paper that provides source code which, after the user clicks on both ends of the leaf, automatically reports the species to which the leaf belongs. All it requires is a database. In the tests reported in that document, the authors claim 90.312% accuracy over 320 tests in total (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, where the metaphase plate, in which the chromosomes are disorganized, must be converted into the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of techniques for this step: the skeletonization process and angle sweeping. Skeletonization consists of suppressing the interior pixels of the chromosome so that only the silhouette remains. This method is very similar to those based on image derivatives, with the difference that it detects not the borders but the interior of the chromosome. The second technique consists of sweeping angles from one end of the chromosome and, taking into account that a single chromosome cannot bend by more than some angle X, detecting the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. For this there is a technique based on the banding pattern (grey-scale bands) that makes each chromosome unique. The program detects the longitudinal axis of the chromosome and reconstructs the band profiles.
Then the computer is able to recognize the chromosome. Concerning future work, we currently have two independent techniques that do not combine detection and recognition, so our main focus would be to prepare a program that brings both together. On the leaf side we have seen that detection and recognition are linked, as both share the option of dividing the leaf into 5 main features. The work to be done is to create an algorithm linking both methods, since in the program that recognizes leaves both leaf ends have to be clicked, so it is not an automatic algorithm. On the chromosome side, we should create an algorithm that searches for the start of the chromosome and then begins to sweep angles, later passing the parameters to the program that searches for the band profiles. Finally, the summary explains why this type of investigation is needed: with global warming, many species (animals and plants) are beginning to go extinct, which is why a large database gathering all possible species is needed. To recognize an animal species, it is enough to have its 23 chromosomes, while for recognizing a plant there are several approaches, the easiest input for a computer being a scan of one of the plant's leaves.

Relevance:

100.00%

Publisher:

Abstract:

Members of the fibroblast growth factor (FGF) family play a critical role in embryonic lung development and adult lung physiology. The in vivo investigation of the role FGFs play in the adult lung has been hampered because the constitutive pulmonary expression of these factors often has deleterious effects and frequently results in neonatal lethality. To circumvent these shortcomings, we expressed FGF-3 in the lungs under the control of the progesterone antagonist-responsive binary transgenic system. Four binary transgenic lines were obtained that showed ligand-dependent induction of FGF-3 with induced levels of FGF-3 expression dependent on the levels of expression of the GLp65 regulator as well as the dose of the progesterone antagonist, RU486, administered. FGF-3 expression in the adult mouse lung resulted in two phenotypes depending on the levels of induction of FGF-3. Low levels of FGF-3 expression resulted in massive free alveolar macrophage infiltration. High levels of FGF-3 expression resulted in diffuse alveolar type II cell hyperplasia. Both phenotypes were reversible after the withdrawal of RU486. This system will be a valuable means of investigating the diverse roles of FGFs in the adult lung.

Relevance:

100.00%

Publisher:

Abstract:

As society becomes increasingly less binary and moves towards a more spectrum-based approach to mental illness, medical illness, and personality, it becomes necessary to address this shift within formerly rigid institutions. This paper explores this shift as it is occurring within correctional settings around the United States concerning the medical care, housing, and safety of transgender inmates. As there is no legal standard for housing or for access to gender-affirming medical care (i.e., hormone therapy, sexual reassignment surgery), these issues are addressed at the institutional level, with very little consistency throughout the country. Currently, most institutions follow a genitalia-based system of classification. Within the system, core beliefs are held, some adaptive and some no longer adaptive, that drive the system's behavior and outcomes. With regard to transgender inmates, several underlying beliefs within the system serve to maintain the status quo; the most basic underpinning, however, is the system's reliance on a binary gender system. As views of humane treatment of the incarcerated expand and modernize, the role of mental health within corrections has also expanded. Psychologists, social workers, counselors, and psychiatrists are found in almost all correctional facilities and have become a voice of advocacy for an often underserved population.

Relevance:

100.00%

Publisher:

Abstract:

"The numbers refer to North Dakota Geological Survey Circular no. 5 (sixth-revision), well numbers and storage buildings."

Relevance:

100.00%

Publisher:

Abstract:

A sieve-plate distillation column has been constructed and interfaced to a minicomputer with the necessary instrumentation for dynamic, estimation and control studies, with special emphasis on low-cost, noise-free instrumentation. A dynamic simulation of the column with a binary liquid system has been compiled using deterministic models that include fluid dynamics via Brambilla's equation for tray liquid-holdup calculations. The simulation's predictions have been tested experimentally under steady-state and transient conditions. The predicted tray temperatures have shown reasonably close agreement with the measured values under steady-state conditions and in the face of a step change in the feed rate. A method of extending linear filtering theory to highly nonlinear systems with very nonlinear measurement relationships has been proposed and tested by simulation on binary distillation. The simulation results have shown that the proposed methodology can overcome the typical instability problems associated with Kalman filters. Three extended Kalman filters have been formulated and tested by simulation. The filters have been used to refine a much-simplified model sequentially and to estimate parameters such as the unmeasured feed composition using information from the column simulation. It is first assumed that corrupted tray-composition measurements are available to the filter; corrupted tray-temperature measurements are then accessed instead. The simulation results have demonstrated the powerful capability of the Kalman filters to overcome the typical hardware problems associated with the operation of on-line analyzers in relation to distillation dynamics and control by, in effect, replacing them. A method of implementing estimator-aided feedforward (EAFF) control schemes has been proposed and tested by simulation on binary distillation.
The results have shown that the EAFF scheme provides much better control and energy conservation than the conventional feedback temperature control in the face of a sustained step change in the feed rate or multiple changes in the feed rate, composition and temperature. Further extensions of this work are recommended as regards simulation, estimation and EAFF control.
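As a hedged illustration of the filtering idea (not the thesis's actual column model), one predict/update cycle of a scalar extended Kalman filter can be sketched as follows; the measurement function relating a composition-like state to a temperature-like reading is invented for the example:

```python
import math

def ekf_step(x, p, z, f, h, df, dh, q, r):
    """One predict/update cycle of a scalar extended Kalman filter.
    x, p: state estimate and its variance; z: new measurement;
    f, h: process and measurement functions; df, dh: their derivatives;
    q, r: process and measurement noise variances."""
    # Predict through the (possibly nonlinear) process model.
    x_pred = f(x)
    p_pred = df(x) * p * df(x) + q
    # Update using the linearized measurement model.
    s = dh(x_pred) * p_pred * dh(x_pred) + r   # innovation variance
    k = p_pred * dh(x_pred) / s                # Kalman gain
    x_new = x_pred + k * (z - h(x_pred))
    p_new = (1.0 - k * dh(x_pred)) * p_pred
    return x_new, p_new

# Toy model: an unmeasured composition assumed constant, observed through a
# nonlinear temperature-like measurement (both functions are illustrative).
f, df = (lambda x: x), (lambda x: 1.0)
h = lambda x: 100.0 - 30.0 * math.tanh(x)
dh = lambda x: -30.0 / math.cosh(x) ** 2

x, p = 0.2, 1.0
for z in [88.0, 87.5, 88.2]:
    x, p = ekf_step(x, p, z, f, h, df, dh, q=1e-4, r=0.25)
```

The estimate converges toward the composition consistent with the noisy temperature readings while its variance shrinks, which is the mechanism by which such a filter can, in effect, replace an on-line analyzer.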

Relevance:

100.00%

Publisher:

Abstract:

The cationic polymerisation of various monomers, including cyclic ethers bearing energetic nitrate ester (-ONO2) groups, substituted styrenes and isobutylene, has been investigated. The main reaction studied has been the ring-opening polymerisation of 3-(nitratomethyl)-3-methyloxetane (NIMMO) using the alcohol/BF3·OEt2 binary initiator system. A series of di-, tri- and tetrafunctional telechelic polymers has been synthesised. In order to optimise the system, achieve polymers of controlled molecular weight and understand the mechanism of polymerisation, the effects of certain parameters on the molecular weight distribution, as determined by size exclusion chromatography, have been examined. The molecular weight achieved depends on a combination of factors, including -OH concentration, monomer addition rate and, most importantly, temperature. Lower temperature and lower OH concentration tend to produce higher molecular weight, whereas slower monomer addition rates either have no significant effect or produce a lower-molecular-weight polymer. These factors were used to increase the formation of a cyclic oligomer by a side reaction, and they suggest that the polymerisation of NIMMO is complicated by end-biting and back-biting reactions, along with other transfer/termination processes. These observations appear to fit the model of an active chain-end mechanism. Another cyclic monomer, glycidyl nitrate (GLYN), has been polymerised by the activated-monomer mechanism. Various other monomers have been used to end-cap the polymer chains to produce hydroxyl ends, which are expected to form more stable urethane links than the glycidyl nitrate ends when cured with isocyanates. A novel monomer, butadiene oxide dinitrate (BODN), has been prepared and its homopolymerisation and copolymerisation with GLYN studied. In concurrent work, the carbocationic polymerisations of isobutylene and substituted styrenes have been studied.
Materials with narrow molecular weight distributions have been prepared using the diphenyl phosphate/BCl3 initiator. These systems and monomers are expected to be used in the synthesis of thermoplastic elastomers.

Relevance:

100.00%

Publisher:

Abstract:

This article is an introduction to the use of relational calculi in deriving programs. Using the relational calculus Ruby, we derive a functional program that adds one bit to a binary number to give a new binary number. The resulting program is unsurprising, being the standard "column of half-adders", but the derivation illustrates a number of points about working with relations rather than with functions.
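The derived circuit can be mirrored in ordinary functional code (Python here, since Ruby in the article is a relational calculus, not the programming language). This is a sketch of the same half-adder column, with bits stored least-significant first:

```python
def half_adder(a, b):
    """Sum bit and carry bit of two input bits."""
    return a ^ b, a & b

def add_bit(bits, bit):
    """Add one bit to a little-endian list of bits by rippling the carry
    through a column of half-adders."""
    out, carry = [], bit
    for b in bits:
        s, carry = half_adder(b, carry)
        out.append(s)
    if carry:
        out.append(carry)  # the number grew by one digit
    return out

# 0b1011 (11) + 1 = 0b1100 (12), little-endian representation
print(add_bit([1, 1, 0, 1], 1))
```

Each half-adder only ever sees one incoming bit plus a carry, which is exactly why the derivation needs half-adders rather than full adders.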

Relevance:

100.00%

Publisher:

Abstract:

Mathematical skills that we acquire during formal education mostly entail exact numerical processing. Besides this specifically human faculty, an additional system exists to represent and manipulate quantities in an approximate manner. We share this innate approximate number system (ANS) with other nonhuman animals and are able to use it to process large numerosities long before we can master the formal algorithms taught in school. Dehaene's (1992) Triple Code Model (TCM) states that even after the onset of formal education, approximate processing is carried out in this analogue magnitude code, no matter whether the original problem was presented nonsymbolically or symbolically. Despite the wide acceptance of the model, most research uses only nonsymbolic tasks to assess ANS acuity. Due to this silent assumption that genuine approximation can only be tested with nonsymbolic presentations, important implications in research domains of high practical relevance have so far remained unclear, and existing potential is not fully exploited. For instance, it has been found that nonsymbolic approximation can predict math achievement one year later (Gilmore, McCarthy, & Spelke, 2010), that it is robust against the detrimental influence of learners' socioeconomic status (SES), and that it is suited to fostering performance in exact arithmetic in the short term (Hyde, Khanum, & Spelke, 2014). We provide evidence that symbolic approximation might be equally, and in some cases even better, suited to generating predictions and fostering more formal math skills independently of SES. In two longitudinal studies, we realized exact and approximate arithmetic tasks in both a nonsymbolic and a symbolic format. With first graders, we demonstrated that performance in symbolic approximation at the beginning of term was the only measure consistently not varying with children's SES, and of the two approximate tasks it was the better predictor of math achievement at the end of first grade.
In part, the strong connection seems to arise from mediation through ordinal skills. In two further experiments, we tested the suitability of both approximation formats for inducing an arithmetic principle in elementary school children. We found that symbolic approximation was as effective as direct instruction in making children exploit the additive law of commutativity in a subsequent formal task. Nonsymbolic approximation, on the other hand, had no beneficial effect. The positive influence of the symbolic approximate induction was strongest in children just starting school and decreased with age; however, even third graders still profited from the induction. The results show that symbolic problems, too, can be processed as genuine approximation, but that beyond this they have their own specific value with regard to didactic and educational concerns. Our findings furthermore demonstrate that the two often confounded factors 'format' and 'demanded accuracy' cannot easily be disentangled in first graders' numerical understanding, and that children's SES also influences the existing interrelations between the different abilities tested here.

Relevance:

50.00%

Publisher:

Abstract:

A Monte Carlo study of the late-time growth of L12-ordered domains in an fcc A3B binary alloy is presented. The energy of the alloy is modeled by a nearest-neighbor-interaction Ising Hamiltonian. The system exhibits a fourfold-degenerate ground state and two kinds of interfaces separating ordered domains: flat and curved antiphase boundaries. Two different dynamics are used in the simulations: the standard atom-atom exchange mechanism and the more realistic vacancy-atom exchange mechanism, and the results obtained by the two are compared. In particular, we study the time evolution of the excess energy, the structure factor and the mean distance between walls. With the atom-atom exchange mechanism, anisotropic growth is found: two characteristic lengths are needed to describe the evolution. In contrast, with the vacancy-atom exchange mechanism, scaling with a single length holds. The results are contrasted with existing experiments on Cu3Au and with theories of anisotropic growth.
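The atom-atom exchange dynamics can be sketched as a Metropolis move that swaps two unlike nearest-neighbour atoms. This toy version uses a 2D square lattice and arbitrary J and T for brevity, not the paper's fcc A3B alloy:

```python
import math
import random

# Kawasaki-style atom-atom exchange Monte Carlo on a small periodic lattice.
# Spins +1/-1 stand in for the two atomic species; parameters are illustrative.
L, J, T = 8, 1.0, 1.0
random.seed(0)
spins = [[random.choice([1, -1]) for _ in range(L)] for _ in range(L)]
NEIGHBOURS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def local_energy(x, y):
    """Ising energy of site (x, y) with its four neighbours (periodic)."""
    s = sum(spins[(x + dx) % L][(y + dy) % L] for dx, dy in NEIGHBOURS)
    return -J * spins[x][y] * s

def exchange_move():
    """Attempt one nearest-neighbour atom-atom exchange (Metropolis)."""
    x, y = random.randrange(L), random.randrange(L)
    dx, dy = random.choice(NEIGHBOURS)
    nx, ny = (x + dx) % L, (y + dy) % L
    if spins[x][y] == spins[nx][ny]:
        return False                 # exchanging identical atoms is a no-op
    before = local_energy(x, y) + local_energy(nx, ny)
    spins[x][y], spins[nx][ny] = spins[nx][ny], spins[x][y]
    # The pair's mutual bond is double-counted in both sums, but the swap
    # leaves that bond unchanged, so the double count cancels in de.
    de = local_energy(x, y) + local_energy(nx, ny) - before
    if de <= 0 or random.random() < math.exp(-de / T):
        return True                  # accept the swap
    spins[x][y], spins[nx][ny] = spins[nx][ny], spins[x][y]
    return False                     # reject: swap back

for _ in range(2000):
    exchange_move()
```

Because moves only exchange atoms, the overall composition (here, the total spin) is conserved; that conservation law is the defining feature of this dynamics, in contrast to single-site flips.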

Relevance:

50.00%

Publisher:

Abstract:

This doctoral thesis concerns the active galactic nucleus (AGN) most often referred to by the catalogue number OJ287. The publications in the thesis present new discoveries about the system in the context of a supermassive binary black hole model. In addition, the introduction discusses general characteristics of the OJ287 system and the physical fundamentals behind these characteristics. The place of OJ287 in the hierarchy of known types of AGN is also discussed. The introduction presents a large selection of fundamental physics required for a basic understanding of active galactic nuclei, binary black holes, relativistic jets and accretion disks. In particular, the general relativistic nature of the orbits of close binaries of supermassive black holes is explored in some detail. Analytic estimates of some of the general relativistic effects in such a binary are presented, as well as numerical methods to calculate the effects more precisely. It is also shown how these results can be applied to the OJ287 system. The binary orbit model forms the basis for models of the recurring optical outbursts in the OJ287 system. In the introduction, two physical outburst models are presented in some detail and compared. The radiation hydrodynamics of the outbursts are discussed and optical light curve predictions are derived. The precursor outbursts studied in Paper III are also presented and tied into the model of OJ287. To complete the discussion of the observable features of OJ287, the nature of the relativistic jets in the system, and in active galactic nuclei in general, is discussed. The basic physics of relativistic jets is presented, with additional detail added in the form of helical jet models. The results of Papers II, IV and V concerning the jet of OJ287 are presented, and their relation to other facets of the binary black hole model is discussed.
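One of the analytic estimates alluded to here is the standard leading-order periastron advance of a relativistic binary, quoted below as a textbook illustration rather than as a formula taken from the thesis:

```latex
% Leading-order general-relativistic periastron advance per orbit for a
% binary of total mass M, semi-major axis a and eccentricity e:
\[
  \Delta\phi = \frac{6\pi G M}{c^{2}\, a \left(1 - e^{2}\right)}
\]
% For a binary of supermassive black holes as compact as the OJ287 model
% requires, this advance is large enough to shift the orientation of the
% orbit appreciably from one orbit to the next.
```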
As a whole, the introduction serves as a guide, though terse, for the physics and numerical methods required to successfully understand and simulate a close binary of supermassive black holes. For this purpose, the introduction necessarily combines a large number of both fundamental and specific results from broad disciplines like general relativity and radiation hydrodynamics. With the material included in the introduction, the publications of the thesis, which present new results with a much narrower focus, can be readily understood. Of the publications, Paper I presents newly discovered optical data points for OJ287, detected on archival astronomical plates from the Harvard College Observatory. These data points show the 1900 outburst of OJ287 for the first time. In addition, new data points covering the 1913 outburst allowed the determination of the start of the outburst with more precision than was possible before. These outbursts were then successfully numerically modelled with an N-body simulation of the OJ287 binary and accretion disc. In Paper II, mechanisms for the spin-up of the secondary black hole in OJ287 via interaction with the primary accretion disc and the magnetic fields in the system are discussed. Timescales for spin-up and alignment via both processes are estimated. It is found that the secondary black hole likely has a high spin. Paper III reports a new outburst of OJ287 in March 2013. The outburst was found to be rather similar to the ones reported in 1993 and 2004. All these outbursts happened just before the main outburst season, and are called precursor outbursts. In this paper, a mechanism was proposed for the precursor outbursts, where the secondary black hole collides with a gas cloud in the primary accretion disc corona. From this, estimates of brightness and timescales for the precursor were derived, as well as a prediction of the timing of the next precursor outburst. 
In Paper IV, observations from the 2004–2006 OJ287 observing program are used to investigate the existence of short periodicities in OJ287. The existence of a ~50 day quasiperiodic component is confirmed. In addition, statistically significant 250 day and 3.5 day periods are found. Primary black hole accretion of a spiral density wave in the accretion disc is proposed as the source of the 50 day period, with numerical simulations supporting these results. Lorentz contracted jet re-emission is then proposed as the reason for the 3.5 day timescale. Paper V fits optical observations and mm and cm radio observations of OJ287 with a helical jet model. The jet is found to have a spine–sheath structure, with the sheath having a much lower Lorentz gamma factor than the spine. The sheath opening angle and Lorentz factor, as well as the helical wavelength of the jet, are reported for the first time.
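The dominant general relativistic effect underlying the OJ287 orbit model discussed above is the periastron advance of the binary. As a rough illustration (not the thesis's full N-body treatment), the leading-order 1PN advance per orbit, Δφ = 6πGM/(c²a(1−e²)), can be evaluated in a few lines; the OJ287-like mass, period and eccentricity below are assumed round numbers for illustration, not the fitted model parameters:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C_LIGHT = 2.998e8  # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def periastron_advance(m_total, a, e):
    """Leading-order (1PN) periastron advance per orbit, in radians:
    delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2))."""
    return 6.0 * math.pi * G * m_total / (C_LIGHT**2 * a * (1.0 - e**2))

def semimajor_axis(m_total, period):
    """Kepler's third law: a = (G*M*P^2 / (4*pi^2))^(1/3)."""
    return (G * m_total * period**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

# Sanity check on Mercury: the formula should give ~43 arcsec/century.
adv_mercury = periastron_advance(M_SUN, 5.791e10, 0.2056)
per_century = math.degrees(adv_mercury) * 3600 * (100 * 365.25 / 87.97)
print(f"Mercury: {per_century:.1f} arcsec per century")

# OJ287-like binary with assumed round numbers (not the fitted model):
m_tot = 1.8e10 * M_SUN   # total mass, kg
p_orb = 12.0 * 3.156e7   # ~12 yr orbital period, s
a = semimajor_axis(m_tot, p_orb)
adv_deg = math.degrees(periastron_advance(m_tot, a, 0.65))
print(f"OJ287-like: {adv_deg:.1f} degrees per orbit")
```

Even this crude estimate gives an advance of tens of degrees per orbit, which is why the timing of the optical outbursts is so sensitive to the orbital elements.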

Relevância: 50.00%

Resumo:

A Monte Carlo study of the late-time growth of L12-ordered domains in an fcc A3B binary alloy is presented. The energy of the alloy is modeled by a nearest-neighbor-interaction Ising Hamiltonian. The system exhibits a fourfold degenerate ground state and two kinds of interfaces separating ordered domains: flat and curved antiphase boundaries. Two different dynamics are used in the simulations: the standard atom-atom exchange mechanism and the more realistic vacancy-atom exchange mechanism. The results obtained by both methods are compared. In particular, we study the time evolution of the excess energy, the structure factor and the mean distance between walls. In the case of the atom-atom exchange mechanism, anisotropic growth is found: two characteristic lengths are needed to describe the evolution. In contrast, with the vacancy-atom exchange mechanism, scaling with a single length holds. The results are contrasted with existing experiments in Cu3Au and with theories of anisotropic growth.
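The atom-atom exchange dynamics mentioned above can be sketched as a minimal Metropolis Monte Carlo loop. This is a simplified 2D square-lattice sketch with a ferromagnetic-sign coupling, not the fcc A3B Hamiltonian of the study; it only illustrates that Kawasaki (exchange) moves conserve the alloy composition while relaxing the energy:

```python
import math
import random

def kawasaki_step(lattice, size, beta, J=1.0):
    """One attempted nearest-neighbour atom-atom exchange (Kawasaki
    dynamics) with Metropolis acceptance. Exchanges conserve the
    numbers of A and B atoms, unlike single-site spin flips."""
    i, j = random.randrange(size), random.randrange(size)
    di, dj = random.choice([(0, 1), (1, 0), (0, -1), (-1, 0)])
    ni, nj = (i + di) % size, (j + dj) % size
    if lattice[i][j] == lattice[ni][nj]:
        return  # swapping identical atoms changes nothing

    def local_energy():
        # Sum of -J * s * s' over all bonds touching the two sites.
        e = 0.0
        for x, y in ((i, j), (ni, nj)):
            for dx, dy in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                e += -J * lattice[x][y] * lattice[(x + dx) % size][(y + dy) % size]
        return e

    e_old = local_energy()
    lattice[i][j], lattice[ni][nj] = lattice[ni][nj], lattice[i][j]
    delta_e = local_energy() - e_old
    if delta_e > 0 and random.random() >= math.exp(-beta * delta_e):
        lattice[i][j], lattice[ni][nj] = lattice[ni][nj], lattice[i][j]  # reject

random.seed(0)
size = 16
# 50/50 A/B alloy encoded as +1/-1 site occupancies
sites = [1] * (size * size // 2) + [-1] * (size * size // 2)
random.shuffle(sites)
lattice = [sites[r * size:(r + 1) * size] for r in range(size)]
n_a = sum(v == 1 for row in lattice for v in row)
for _ in range(20000):
    kawasaki_step(lattice, size, beta=1.0)
print("A atoms before and after:", n_a, sum(v == 1 for row in lattice for v in row))
```

The vacancy-atom mechanism of the study would replace the direct swap by moves of a single vacancy exchanging with neighbouring atoms, which changes the growth kinetics but not the conserved composition.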

Relevância: 40.00%

Resumo:

The mesoporous SBA-15 silica, with uniform hexagonal pores, a narrow pore size distribution and a tuneable pore diameter, was organofunctionalized with a glutaraldehyde-bridged silylating agent. The precursor and its derivative silicas were loaded with ibuprofen for controlled delivery in simulated biological fluids. The synthesized silicas were characterized by elemental analysis, infrared spectroscopy, ¹³C and ²⁹Si solid-state NMR spectroscopy, nitrogen adsorption, X-ray diffractometry, thermogravimetry and scanning electron microscopy. Surface functionalization with the amine-containing bridged hydrophobic structure significantly decreased the surface area, from 802.4 to 63.0 m² g⁻¹, and the pore diameter, from 8.0 to 6.0 nm, which ultimately increased the drug-loading capacity from 18.0% up to 28.3% and gave a very slow release of ibuprofen over a period of 72.5 h. The in vitro drug release demonstrated that SBA-15 presented the fastest release, from 25% to 27%, while SBA-15GA gave nearly 10% drug release in all fluids during 72.5 h. The Korsmeyer-Peppas model best fits the release data, with a Fickian diffusion mechanism and zero-order kinetics for the synthesized mesoporous silicas. Both pore size and hydrophobicity influenced the rate of the release process, indicating that the chemically modified silica can be suggested for the design of formulations with slow and constant release over a defined period, to avoid repeated administration.
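The Korsmeyer-Peppas fit referred to above reduces to a linear regression in log-log coordinates, since M_t/M_∞ = k·tⁿ implies log(M_t/M_∞) = log k + n·log t. A minimal sketch, checked here on synthetic Fickian (n = 0.5) data rather than the paper's release curves:

```python
import math

def fit_korsmeyer_peppas(times, fractions):
    """Fit the Korsmeyer-Peppas law M_t/M_inf = k * t^n by linear
    least squares on log(M_t/M_inf) = log(k) + n*log(t)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    m = len(xs)
    mean_x = sum(xs) / m
    mean_y = sum(ys) / m
    n = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    k = math.exp(mean_y - n * mean_x)
    return k, n

# Synthetic Fickian data (k = 0.06, n = 0.5) as a self-check; real fits
# conventionally use only the first ~60% of the release curve.
t_hours = [0.5, 1, 2, 4, 8, 16, 24]
released = [0.06 * t ** 0.5 for t in t_hours]
k_fit, n_fit = fit_korsmeyer_peppas(t_hours, released)
print(f"k = {k_fit:.3f}, n = {n_fit:.3f}")  # n ~ 0.5 indicates Fickian diffusion
```

For a cylindrical pore geometry, an exponent near 0.5 signals Fickian diffusion and an exponent near 1 signals zero-order (constant-rate) release.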

Relevância: 40.00%

Resumo:

We consider a binary Bose-Einstein condensate (BEC) described by a system of two-dimensional (2D) Gross-Pitaevskii equations with the harmonic-oscillator trapping potential. The intraspecies interactions are attractive, while the interaction between the species may have either sign. The same model applies to the copropagation of bimodal beams in photonic-crystal fibers. We consider a family of trapped hidden-vorticity (HV) modes in the form of bound states of two components with opposite vorticities S1,2 = ±1, the total angular momentum being zero. A challenging problem is the stability of the HV modes. By means of a linear-stability analysis and direct simulations, stability domains are identified in a relevant parameter plane. In direct simulations, stable HV modes feature robustness against large perturbations, while unstable ones split into fragments whose number is identical to the azimuthal index of the fastest-growing perturbation eigenmode. Conditions allowing for the creation of the HV modes in the experiment are discussed too. For comparison, a similar but simpler problem is studied in an analytical form, viz., the modulational instability of an HV state in a one-dimensional (1D) system with periodic boundary conditions (this system models a counterflow in a binary BEC mixture loaded into a toroidal trap, or a bimodal optical beam coupled into a cylindrical shell). We demonstrate that the stabilization of the 1D HV modes is impossible, which stresses the significance of the stabilization of the HV modes in the 2D setting.
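The modulational instability that destabilizes the 1D states discussed above can be reproduced numerically with a standard split-step Fourier integrator for the Gross-Pitaevskii equation. This sketch uses a single attractive component on a ring in units ħ = m = 1, not the full two-component hidden-vorticity problem, and simply shows a small perturbation of a uniform background being amplified:

```python
import numpy as np

def split_step_gpe_1d(psi, dx, dt, steps, g):
    """First-order split-step Fourier integration of the 1D
    Gross-Pitaevskii equation  i psi_t = -(1/2) psi_xx + g|psi|^2 psi
    with periodic boundaries (a ring geometry), in units hbar = m = 1."""
    k = 2.0 * np.pi * np.fft.fftfreq(psi.size, d=dx)
    kinetic = np.exp(-0.5j * k**2 * dt)  # exact kinetic propagator in k-space
    for _ in range(steps):
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))
        psi = psi * np.exp(-1j * g * np.abs(psi)**2 * dt)  # nonlinear phase step
    return psi

n = 256
dx = 32.0 / n
rng = np.random.default_rng(1)
# Uniform background with a tiny random perturbation; g < 0 (attractive)
# makes the background modulationally unstable.
psi0 = np.ones(n, dtype=complex) + 1e-4 * rng.standard_normal(n)
psi = split_step_gpe_1d(psi0, dx, dt=0.005, steps=2000, g=-1.0)
contrast0 = float(np.abs(psi0).max() - np.abs(psi0).min())
contrast = float(np.abs(psi).max() - np.abs(psi).min())
print(f"density contrast: {contrast0:.2e} -> {contrast:.2e}")
```

Both sub-steps are unitary, so the norm is conserved to rounding error while the density contrast grows by orders of magnitude, the hallmark of modulational instability.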

Relevância: 40.00%

Resumo:

CoB, Co2B, CoSi, Co2Si and Co5Si2B phases can be formed during heat-treatment of amorphous Co-Si-B soft magnetic materials. Thus, it is important to determine their magnetic behavior as a function of applied field and temperature. In this study, polycrystalline single-phase samples of the above phases were produced via arc melting and heat-treatment under argon. The single-phase nature of the samples was confirmed via X-ray diffraction experiments. AC and DC magnetization measurements showed that the Co2Si and Co5Si2B phases are paramagnetic. Minor amounts of either Co2Si or CoSi2 in the CoSi-phase sample suggested paramagnetic behavior of the CoSi phase; however, it should be diamagnetic, as shown in the literature. The diamagnetic behavior of the CoB phase was also confirmed. The paramagnetic behavior of Co5Si2B is reported here for the first time. The magnetization results for the Co2B phase show a ferromagnetic signature already verified in previous NMR studies. A detailed set of magnetization measurements of this phase showed a change of the easy magnetization axis starting at 70 K, over a temperature interval of about 13 K at a very small field of 1 mT. As the strength of the field is increased, the temperature interval is enlarged. The field strength at which the magnetization saturates increases almost linearly as the temperature is increased above 70 K. The room-temperature total magnetostriction of the Co2B phase was determined to be 8 ppm at a field of 1 T.
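A standard way to establish paramagnetic behavior from DC magnetization data of the kind described above is a Curie-Weiss fit of the susceptibility, χ = C/(T − θ), which is linear in the form 1/χ = T/C − θ/C. A minimal sketch on synthetic data (the C and θ values are assumed for illustration, not measurements from the study):

```python
def fit_curie_weiss(temps, chis):
    """Least-squares fit of the Curie-Weiss law chi = C / (T - theta)
    using its linear form 1/chi = (1/C)*T - theta/C."""
    ys = [1.0 / c for c in chis]
    m = len(temps)
    mean_t = sum(temps) / m
    mean_y = sum(ys) / m
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(temps, ys))
             / sum((t - mean_t) ** 2 for t in temps))
    intercept = mean_y - slope * mean_t
    curie_c = 1.0 / slope
    theta = -intercept * curie_c
    return curie_c, theta

# Synthetic paramagnet with assumed C = 0.05 emu K mol^-1 Oe^-1, theta = -20 K.
temps = [100.0, 150.0, 200.0, 250.0, 300.0]
chis = [0.05 / (t + 20.0) for t in temps]
c_fit, theta_fit = fit_curie_weiss(temps, chis)
print(f"C = {c_fit:.4f}, theta = {theta_fit:.1f} K")
```

A linear 1/χ versus T plot identifies a paramagnet, while temperature-independent negative susceptibility would instead indicate diamagnetism, as found for CoB and CoSi.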