954 results for quantum corrections to solitons
Abstract:
Demersal fisheries targeting a few high-value species often catch and discard other "non-target" species. It is difficult to quantify the impact of this incidental mortality when the population biomass of a non-target species is unknown. We calculate biomass for 14 demersal fish species in ICES Area VIIg (Celtic Sea) by applying species- and length-based catchability corrections to catch records from the Irish Groundfish Survey (IGFS). We then combine these biomass estimates with records of commercial discards (and landings for marketable non-target species) to calculate annual harvesting rates (HR) for each study species. Uncertainty is incorporated into estimates of both biomass and HR. Our survey-based HR estimates for cod and whiting compared well with HR-converted fishing mortality (F) estimates from analytical assessments for these two stocks. Of the non-target species tested, red gurnard (Chelidonichthys cuculus) recorded some annual HRs greater than those for cod or whiting, challenging "Pope's postulate" that F on non-target stocks in an assemblage will not exceed that on target stocks. We relate HR for each species to two corresponding maximum sustainable yield (MSY) reference levels; six non-target species (including three ray species) show annual HRs >= HR_MSY. This result suggests that it may not be possible to conserve vulnerable non-target species when F is coupled to that of target species. Based on biomass, HR, and HR_MSY, we estimate a "total allowable catch" for each non-target species.
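The bookkeeping behind these estimates can be made explicit. As a minimal sketch (the symbols B_y, D_y and L_y for survey-derived biomass, discards and landings in year y are our shorthand, not the authors' notation):

    \mathrm{HR}_y = \frac{D_y + L_y}{B_y}, \qquad \mathrm{TAC}_y \approx \mathrm{HR}_{\mathrm{MSY}} \times B_y,

i.e. the harvesting rate is the fraction of available biomass removed in a year, and a catch ceiling follows by applying the MSY-level rate to the current biomass estimate.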
Abstract:
We provide an algorithm that automatically derives many provable theorems in the equational theory of allegories. This was accomplished by noticing properties of an existing decision algorithm that could be extended to provide a derivation in addition to a decision certificate. We also suggest improvements and corrections to previous research in order to motivate further work on a complete derivation mechanism. The results presented here are significant for those interested in relational theories, since we essentially have a subtheory where automatic proof-generation is possible. This is also relevant to program verification since relations are well-suited to describe the behaviour of computer programs. It is likely that extensions of the theory of allegories are also decidable and possibly suitable for further expansions of the algorithm presented here.
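For readers outside the field, the equational theory in question is that of allegories in the sense of Freyd and Scedrov: categories of relations with composition R;S, converse R°, and meet R ∩ S on hom-sets. Representative laws (our selection, as an illustration of the kind of identities such an algorithm manipulates) include:

    (R;S)^{\circ} = S^{\circ};R^{\circ}, \qquad (R;S) \cap T \subseteq R;(S \cap (R^{\circ};T)),

the second being the modular law, the characteristic axiom of allegories.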
Abstract:
Since the introduction of quantum mechanics, many of nature's mysteries have found their explanations. Increasingly, concepts from quantum mechanics have become intertwined with others from computational complexity theory, and new ideas and solutions have been discovered and developed to solve these computational problems. In particular, quantum mechanics has shaken several security proofs of classical protocols. In this thesis, we survey recent results on the implications of quantum mechanics for computational complexity, more specifically for interactive classes. We present this body of research using the nomenclature of cooperative games with imperfect information. We set out the differences between the classical, quantum, and non-signalling theories and illustrate them with the example of the odd-cycle game. We focus on two major themes: the effect on a game of adding players, and the effect of parallel repetition. We observe that these modifications have very different consequences depending on the physical theory considered.
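For context, the odd-cycle game referred to is the standard example from the quantum non-locality literature (Cleve, Høyer, Toner and Watrous); for an odd cycle of length n, the commonly quoted optimal winning probabilities under the three theories, cited here on the assumption that the thesis uses the usual formulation, are

    \omega_{\mathrm{classical}} = 1 - \frac{1}{2n}, \qquad \omega_{\mathrm{quantum}} = \cos^{2}\!\left(\frac{\pi}{4n}\right), \qquad \omega_{\mathrm{non\text{-}signalling}} = 1,

which makes the game a clean separator of the three physical theories compared in the thesis.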
Abstract:
In the era of New France, it was not uncommon for children under one year of age to die. Parents accepted the deaths of their children with wisdom and resignation: such was the will of the Almighty. Thanks to the Registre de la Population du Québec Ancien (R.P.Q.A.), developed by the Programme de Recherche en Démographie Historique (P.R.D.H.), the scale of infant mortality could be measured according to several criteria, some determining factors examined, and an intergenerational component identified. Covering for the first time the entire existence of the colony, our results confirm the magnitude of child mortality in the seventeenth and eighteenth centuries (between 140 and 260 per thousand before correction for under-registration of deaths). Tangible disparities were found between the sexes, by place of birth, and by the occupational category of the child's father. The unequal survival probabilities of infants reflect the physiological inequity between the sexes, with male excess mortality of around 20%, and the influence of the environment in which the family lived: infants in Quebec City died on average 1.5 to 1.2 times more often than infants in the countryside. Montreal, a veritable hecatomb that remains unexplained, lost 50% of its children before the age of one, 1.9 times more infant deaths than among rural children, who despite everything enjoyed the benefits of their environment. The deleterious effects of wet-nursing, which affected more than half the children of well-off urban families, ravaged their offspring ever more deeply. Decomposing infant mortality into its endogenous and exogenous components reveals that exogenous causes account for at least 70% of all infant deaths. Recurrent infectious diseases, the absence of personal hygiene, and the insalubrity of the cities were all dangers for children. From a more familial and intergenerational perspective, in which the child is part of a sibling group, significant risks were obtained for several determining characteristics. Mothers under 20 or over 30 years of age, a birth rank above 8, a birth interval of less than 21 months, or the death of the preceding sibling increase the risk of dying before the first birthday by roughly 10 to 70%, because a child's fate is not independent of the characteristics of its mother or siblings. We also found a positive relationship between the infant mortality experienced by a mother and that of her daughters. The observed proportion of daughters who, like their mother, lost at least 40% of their children is 1.3 to 1.9 times greater than expected, both for daughters with 9 or fewer children and for those with 10 or more. An intergenerational transmission of infant mortality would thus appear to exist even when controlling for period and family size.
Abstract:
Thesis completed under joint supervision (cotutelle) with the Université Catholique de Louvain (Belgium)
Abstract:
We study the static properties of the Little model with asymmetric couplings. We show that the thermodynamics of this model coincides with that of the Sherrington-Kirkpatrick model, and we compute the main finite-size corrections to the difference of the free energy between these two models and to some relevant order parameters. Our results agree with numerical simulations. Numerical results are also presented for the symmetric Little model, showing that the same conclusions hold in that case.
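For reference, the Sherrington-Kirkpatrick benchmark invoked here is the mean-field spin glass defined (in standard conventions, not restated in the abstract) by

    H = -\sum_{i<j} J_{ij}\, s_i s_j, \qquad s_i = \pm 1, \qquad J_{ij} \sim \mathcal{N}(0, J^2/N),

while the Little model subjects spins with the same random couplings to synchronous (parallel) dynamics, which is what makes the coincidence of the two thermodynamics a nontrivial statement.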
Abstract:
We study the properties of the 1S0 pairing gap in low-density neutron matter. Different corrections to the lowest-order scattering length approximation are explored, resulting in a strong suppression with respect to the BCS result.
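The lowest-order scattering-length approximation referred to here is, in its standard weak-coupling BCS form (our rendering, with a_s the 1S0 neutron-neutron scattering length and E_F, k_F the Fermi energy and momentum),

    \Delta_{\mathrm{BCS}} \simeq \frac{8}{e^{2}}\, E_F\, \exp\!\left(\frac{\pi}{2 k_F a_s}\right), \qquad a_s < 0,

and the corrections explored in the paper act on top of this expression; a classic example is the Gor'kov-Melik-Barkhudarov screening correction, which alone suppresses the gap by roughly a factor of two.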
Abstract:
By the end of the first day of embryonic development, zebrafish primordial germ cells (PGCs) arrive at the site where the gonad develops. In our study we investigated the mechanisms controlling the precision of primordial germ cell arrival at their target. We found that, in contrast with our expectations based on findings in Drosophila and mouse, the endoderm does not constitute a preferred migration substrate for the PGCs. Rather, endoderm derivatives are important for later stages of organogenesis, keeping the PGC clusters separated. It would be interesting to investigate the precise mechanism by which the endoderm controls germ cell position in the gonad. In their migration towards the gonad, zebrafish germ cells follow a gradient of the chemokine SDF-1a, which they detect using the receptor CXCR4b expressed on their membrane. Here we show that the C-terminal region of CXCR4b is responsible for down-regulation of receptor activity as well as for receptor internalization. We demonstrate that receptor molecules unable to internalize are less potent in guiding germ cells to the site where the gonad develops, implicating chemokine receptor internalization in the precision of migration during chemotaxis in vivo. We demonstrate that while CXCR4b activity positively regulates the duration of the active migration phases, the down-regulation of CXCR4b signalling by internalization limits the duration of these phases. In this way, receptor signalling contributes to the persistence of germ cell migration, whereas receptor down-regulation enables the cells to stop and correct their migration path close to the target, where germ cells encounter the highest chemokine signal. Chemokine receptors are involved in directing cell migration in different processes such as lymphocyte trafficking, cancer, and the development of the vascular system. The C-terminal domain of many chemokine receptors has been shown to be essential for controlling receptor signalling and internalization. It would therefore be important to determine whether the role of receptor internalization in vivo described here (allowing periodic corrections to the migration route) and the mechanism involved (reducing the level of signalling) apply to those other events, too.
Abstract:
The interaction of short intense laser pulses with atoms and molecules produces a multitude of highly nonlinear processes requiring a non-perturbative treatment. Detailed study of these processes by numerically solving the time-dependent Schrödinger equation becomes a daunting task when the number of degrees of freedom is large, and the coupling between the electronic and nuclear degrees of freedom further aggravates the computational problem. In the present work we show that the time-dependent Hartree (TDH) approximation, which neglects correlation effects, gives an unreliable description of the system dynamics both in the absence and in the presence of an external field. A theoretical framework is therefore required that treats the electrons and nuclei on an equal footing and fully quantum mechanically. To address this issue we discuss two approaches, namely multicomponent density functional theory (MCDFT) and the multiconfiguration time-dependent Hartree (MCTDH) method, which go beyond the TDH approximation and describe the correlated electron-nuclear dynamics accurately. In the MCDFT framework, where the time-dependent electronic and nuclear densities are the basic variables, we discuss an algorithm to calculate the exact Kohn-Sham (KS) potentials for small model systems. By simulating the photodissociation process in a model hydrogen molecular ion, we show that the exact KS potentials contain all the many-body effects and give an insight into the system dynamics. In the MCTDH approach, the wave function is expanded as a sum of products of single-particle functions (SPFs). The MCTDH method is able to describe electron-nuclear correlation effects, as the SPFs and the expansion coefficients evolve in time, and gives an accurate description of the system dynamics. We show that the MCTDH method is suitable for studying a variety of processes such as the fragmentation of molecules, high-order harmonic generation, the two-center interference effect, and the lochfrass effect. We discuss these phenomena in a model hydrogen molecular ion and a model hydrogen molecule. The inclusion of absorbing boundaries in the mean-field approximation and its consequences are discussed using the model hydrogen molecular ion. To this end, two types of calculations are considered: (i) a variational approach with a complex absorbing potential included in the full many-particle Hamiltonian and (ii) an approach in the spirit of time-dependent density functional theory (TDDFT), including complex absorbing potentials in the single-particle equations. It is elucidated that for small grids the TDDFT approach is superior to the variational approach.
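The MCTDH expansion mentioned in the abstract has the standard form for f degrees of freedom q_1, ..., q_f:

    \Psi(q_1,\dots,q_f,t) = \sum_{j_1=1}^{n_1} \cdots \sum_{j_f=1}^{n_f} A_{j_1 \cdots j_f}(t) \prod_{\kappa=1}^{f} \varphi^{(\kappa)}_{j_\kappa}(q_\kappa, t),

in which both the expansion coefficients A and the single-particle functions \varphi are propagated variationally; the TDH approximation criticized above is recovered when a single configuration is retained.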
Abstract:
Finding and replacing text in an MS Word 2010 document can save time when you need to make corrections to a word or phrase throughout your file. For best viewing, download the video.
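For readers who want the same operation scripted rather than performed through the Word 2010 dialog (Home tab, Editing group, Replace, or Ctrl+H), a minimal Python sketch using the third-party python-docx package could look like the following; the file name and search strings are placeholders, and note that a phrase split across formatting runs will not be matched by this naive loop:

    # pip install python-docx
    from docx import Document

    doc = Document("report.docx")            # placeholder file name
    for paragraph in doc.paragraphs:
        for run in paragraph.runs:           # a run is a span of uniformly formatted text
            if "teh" in run.text:            # placeholder search string
                run.text = run.text.replace("teh", "the")
    doc.save("report-corrected.docx")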
Abstract:
This thesis addresses different aspects of the computation of quantum similarity, as well as its application to rationalizing and predicting drug activity. Two important advances stand out in the development of new methodologies that facilitate the computation of quantum similarity measures. First, describing molecules by means of the approximate PASA (Promolecular Atomic Shell Approximation) density functions made it possible to describe the electron density of the analyzed molecular systems with sufficient accuracy, while substantially reducing the computation time of the similarity measures. Second, the development of molecular superposition techniques specific to quantum similarity measures solved the problem of aligning the compared compounds in space. The refinement of these new procedures and of the mathematical algorithms associated with quantum molecular similarity measures has been essential for progress in different disciplines of computational chemistry, above all those concerned with quantitative analyses relating molecular structures to their biological activities, known by the English acronym QSAR (Quantitative Structure-Activity Relationships). Precisely in the area of structure-activity relationships, two approaches grounded in quantum molecular similarity are presented, arising from two different representations of the molecules. The first description considers the global electron density of the molecules; here the arrangement of the compared objects in space and their three-dimensional conformation matter, among other factors. The result is a similarity matrix containing the similarity measures for all pairs of compounds in the studied set. The second description is based on partitioning the global density of the molecules into fragments. Self-similarity measures are used to analyze the basic requirements of a given activity from the standpoint of quantum similarity. The process allows the detection of the molecular regions responsible for a high biological response, yielding a pattern of active regions that is of evident interest for drug design purposes. In short, it has been shown that computer simulation and manipulation of molecules in three dimensions can provide essential information for the study of the interaction between drugs and their macromolecular receptors.
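The quantum similarity measures underlying the thesis are, in their most common overlap form (standard definitions from the quantum similarity literature, with \rho_A and \rho_B the electron densities of the two compared molecules),

    Z_{AB} = \int \rho_A(\mathbf{r})\, \rho_B(\mathbf{r})\, d\mathbf{r}, \qquad C_{AB} = \frac{Z_{AB}}{\sqrt{Z_{AA}\, Z_{BB}}},

where C_{AB} is the Carbó similarity index and the self-similarities Z_{AA} are the quantities used in the fragment-based analysis described above.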
Abstract:
Difficulty in producing written Portuguese and students' persistent lack of motivation to write have been a major concern for educators. As the theoretical grounding for written Portuguese we follow sociodiscursive interactionism in the approach of Bronckart and Bakhtin, the discursive genre according to Bakhtin, and the theory of habitus as conceived by Bourdieu, with the intention of recognizing in written language the concrete manifestation of the linguistic competence of school texts and social contexts. From this perspective we aim to analyze the sociodiscursive and cognitive interactionist practice of written Portuguese in texts produced by secondary-school students, and in the rewriting of those texts following the teacher's guidance. Observing sociodiscursive interactionism in practice, in students' written textual production, was the great challenge of this research. The empirical work showed that the rewritten texts improved in several respects, since some students managed to excel in the aspects expected by the teacher, while others remained tied to their first text, with no change in structure or argumentation, limiting themselves to grammatical corrections. No student ventured a new line of argument, or a new strategy in defense of the proposed arguments. There is a clear need for new approaches to guide teachers in textual correction, approaches capable of motivating students to reflect and allowing them to draw on interaction with the teacher to consciously improve their written work.
Abstract:
Alternative meshes of the sphere and adaptive mesh refinement could be immensely beneficial for weather and climate forecasts, but it is not clear how mesh refinement should be achieved. A finite-volume model that solves the shallow-water equations on any mesh of the surface of the sphere is presented. The accuracy and cost effectiveness of four quasi-uniform meshes of the sphere are compared: a cubed sphere, reduced latitude–longitude, hexagonal–icosahedral, and triangular–icosahedral. On some standard shallow-water tests, the hexagonal–icosahedral mesh performs best and the reduced latitude–longitude mesh performs well only when the flow is aligned with the mesh. The inclusion of a refined mesh over a disc-shaped region is achieved using either gradual Delaunay, gradual Voronoi, or abrupt 2:1 block-structured refinement. These refined regions can actually degrade global accuracy, presumably because of changes in wave dispersion where the mesh is highly nonuniform. However, using gradual refinement to resolve a mountain in an otherwise coarse mesh can improve accuracy for the same cost. The model prognostic variables are height and momentum collocated at cell centers, and (to remove grid-scale oscillations of the A grid) the mass flux between cells is advanced from the old momentum using the momentum equation. Quadratic and upwind biased cubic differencing methods are used as explicit corrections to a fast implicit solution that uses linear differencing.
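For reference, the equations being solved, written in the flux form consistent with the model's prognostic variables of height h and momentum h\mathbf{u} (our notation; f is the Coriolis parameter, g gravity, and h_s the height of any orography):

    \frac{\partial h}{\partial t} + \nabla \cdot (h\mathbf{u}) = 0, \qquad \frac{\partial (h\mathbf{u})}{\partial t} + \nabla \cdot (h\mathbf{u} \otimes \mathbf{u}) + f\,\hat{\mathbf{k}} \times (h\mathbf{u}) = -g h \nabla (h + h_s).

The exact form used in the paper may differ in detail, but this is the standard rotating shallow-water system on which such mesh comparisons are run.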
Abstract:
Recent observations from the Argo dataset of temperature and salinity profiles are used to evaluate a series of 3-year data assimilation experiments in a global ice–ocean general circulation model. The experiments are designed to evaluate a new data assimilation system whereby salinity is assimilated along isotherms, S(T). In addition, the role of a balancing salinity increment to maintain water mass properties is investigated. This balancing increment is found to effectively prevent spurious mixing in tropical regions induced by univariate temperature assimilation, allowing the correction of isotherm geometries without adversely influencing temperature–salinity relationships. In addition, the balancing increment is able to correct a fresh bias associated with a weak subtropical gyre in the North Atlantic using only temperature observations. The S(T) assimilation method is found to provide an important improvement over conventional depth level assimilation, with lower root-mean-squared forecast errors over the upper 500 m in the tropical Atlantic and Pacific Oceans. An additional set of experiments is performed whereby Argo data are withheld and used for independent evaluation. The most significant improvements from Argo assimilation are found in less well-observed regions (Indian, South Atlantic and South Pacific Oceans). When Argo salinity data are assimilated in addition to temperature, improvements to modelled temperature fields are obtained due to corrections to model density gradients and the resulting circulation. It is found that observations from the Argo array provide an invaluable tool for both correcting modelled water mass properties through data assimilation and for evaluating the assimilation methods themselves.
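The balancing salinity increment evaluated here follows the water-mass-preserving idea usually associated with Troccoli and Haines (an attribution the abstract itself does not make explicit): when assimilation corrects temperature by an increment \Delta T, salinity receives the schematic companion increment

    \Delta S_b = \left. \frac{\partial S}{\partial T} \right|_{\text{model background}} \Delta T,

so that the model's local T–S relationship, and hence its water masses, are preserved while the isotherm geometry is corrected.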