Abstract:
Radiative decay processes at cold and ultracold temperatures for sulfur atoms colliding with protons are investigated. The MOLPRO quantum chemistry suite of codes was used to obtain accurate potential energies and transition dipole moments, as a function of internuclear distance, between low-lying states of the SH+ molecular cation. A multi-reference configuration-interaction approximation together with the Davidson correction is used to determine the potential energy curves and transition dipole moments between the states of interest, where the molecular orbitals are obtained from state-averaged multi-configuration self-consistent field calculations. The collision problem is solved approximately using an optical potential method to obtain radiative loss, and a fully quantal two-channel approach for radiative charge transfer. Cross sections and rate coefficients are determined for the first time for temperatures ranging from 10 μK up to 10 000 K. Results are obtained for all isotopes of sulfur colliding with H+ and D+ ions, and comparison is made to a number of other collision systems.
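For reference, the rate coefficients quoted above follow from the computed cross sections through the usual Maxwellian average over collision energy (this is the standard definition, not a detail specific to this work):

k(T) = \sqrt{\frac{8}{\pi \mu (k_B T)^3}} \int_0^{\infty} \sigma(E)\, E\, e^{-E/(k_B T)}\, \mathrm{d}E

where \mu is the reduced mass of the colliding pair (hence the dependence on the sulfur isotope and on H+ versus D+) and \sigma(E) is the radiative charge-transfer or radiative-loss cross section.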
Abstract:
Energy levels, radiative rates and lifetimes are calculated among the lowest 98 levels of the n ≤ 4 configurations of Be-like Al X. The GRASP (General-purpose Relativistic Atomic Structure Package) is adopted and data are provided for all E1, E2, M1 and M2 transitions. Similar data are also obtained with the FAC (Flexible Atomic Code) to assess the accuracy of the calculations. Based on comparisons between the calculations with the two codes, as well as with available measurements, our listed energy levels are assessed to be accurate to better than 0.3 per cent. However, the accuracy of the radiative rates and lifetimes is estimated to be about 20 per cent. Collision strengths are also calculated, for which the DARC (Dirac Atomic R-matrix Code) is used. A wide energy range (up to 380 Ryd) is considered and resonances are resolved on a fine energy mesh in the threshold region. The collision strengths are subsequently averaged over a Maxwellian velocity distribution to determine effective collision strengths up to a temperature of 1.6 × 10⁷ K. Our results are compared with the previous (limited) atomic data and significant differences (up to a factor of 4) are noted for several transitions, particularly those which are not allowed in jj coupling.
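For context, the effective collision strengths referred to above are the usual Maxwellian averages of the collision strengths (the general definition, not a detail specific to this calculation):

\Upsilon_{ij}(T_e) = \int_0^{\infty} \Omega_{ij}(E_j)\, e^{-E_j/(k T_e)}\, \mathrm{d}\!\left(\frac{E_j}{k T_e}\right)

where E_j is the energy of the scattered electron with respect to the upper level j and k is the Boltzmann constant.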
Abstract:
We report calculations of energy levels, radiative rates, oscillator strengths and line strengths for transitions among the lowest 231 levels of Ti VII. The general-purpose relativistic atomic structure package and flexible atomic code are adopted for the calculations. Radiative rates, oscillator strengths and line strengths are provided for all electric dipole (E1), magnetic dipole (M1), electric quadrupole (E2) and magnetic quadrupole (M2) transitions among the 231 levels, although calculations have been performed for a much larger number of levels (159 162). In addition, lifetimes for all 231 levels are listed. Comparisons are made with existing results and the accuracy of the data is assessed. In particular, the most recent calculations reported by Singh et al (2012 Can. J. Phys. 90 833) are found to be unreliable, with discrepancies for energy levels of up to 1 Ryd and for radiative rates of up to five orders of magnitude for several transitions, particularly the weaker ones. Based on several comparisons among a variety of calculations with two independent codes, as well as with the earlier results, our listed energy levels are estimated to be accurate to better than 1% (within 0.1 Ryd), whereas results for radiative rates and other related parameters should be accurate to better than 20%.
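For context, the listed lifetimes follow from the radiative rates in the standard way (a general relation, not specific to this work): the lifetime \tau_j of level j is the reciprocal of the sum of the transition probabilities A_{ji} to all lower levels i,

\tau_j = 1 \Big/ \sum_i A_{ji}.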
Abstract:
This essay is premised on the following: a conspiracy to fix or otherwise manipulate the outcome of a sporting event for profitable purpose. That conspiracy is in turn predicated on the conspirators’ capacity to: (a) ensure that the fix takes place as pre-determined; (b) manipulate the betting markets that surround the sporting event in question; and (c) collect their winnings undetected by either the betting industry’s security systems or the attention of any national regulatory body or law enforcement agency.
Unlike many essays on this topic, this contribution does not focus on the “fix” – part (a) of the above equation. It does not seek to explain how or why a participant or sports official might facilitate a betting scam, either through on-field behaviour that manipulates the outcome of a game or by presenting others with privileged inside information in advance of a game. Neither does this contribution seek to give any real insight into the second part of the above equation: how such conspirators manipulate a sports betting market by playing or laying the handicap, in-play or other offered betting odds. In fact, this contribution is not really about the mechanics of sports betting or match fixing at all; rather, it is about the sometimes under-explained reason why match fixing has reportedly become increasingly attractive of late to international crime syndicates. That reason relates to the fact that, given the traditional liquidity of gambling markets, sports betting can be, and has long been, an attractively accessible conduit for criminal syndicates to launder the proceeds of crime. Accordingly, the term “winnings”, noted in part (c) of the above equation, takes on an altogether more nefarious meaning.
This essay’s attempt to review the possible links between match fixing in sport, gambling-related “winnings” and money laundering is presented in four parts.
First, some context will be given to what is meant by money laundering, how it is currently policed internationally and, most importantly, how the growth of online gambling presents a unique set of vulnerabilities and opportunities to launder the proceeds of crime. The globalisation of organised crime, sports betting and transnational financial services now means that money laundering opportunities have moved well beyond a flutter on the horses at your local racetrack or at the roulette table of your nearest casino. The growth of online gambling platforms means that, at a click, the proceeds of crime in one jurisdiction can be placed on a betting market in another jurisdiction, with the winnings drawn down and laundered in a third; this internationalisation of gambling-related money laundering threatens the integrity of sport globally.
Second, referring back to the infamous hearings of the US Senate Special Committee to Investigate Organised Crime in Interstate Commerce of the early 1950s (“the Kefauver Committee”), this article will illustrate the long-standing interest of organised crime gangs – in this instance, various Mafia families in the United States – in money laundering via sports gambling-related means.
Third, and using the seminal 2009 report “Money Laundering through the Football Sector” by the Financial Action Task Force (FATF, an inter-governmental body established in 1989 to promote effective implementation of legal, regulatory and operational measures for combating money laundering, terrorist financing and other related threats to the integrity of the international financial system), this essay seeks to assess the vulnerabilities of international sport to match fixing, as motivated in part by the associated secondary criminality of tax evasion and transnational economic crime.
The fourth and concluding parts of the essay turn from problems to possible solutions. The underlying premise here is that heretofore there has been an insularity to the way sports organisations have both conceptualised and sought to address the match fixing threat, e.g., if we (in sport) initiate player education programmes, establish integrity units and strictly enforce codes of conduct and sanctions, then our integrity or brand should be protected. This essay argues that, although these initiatives are important, the source and process of match fixing lie beyond sport’s current capacity, as do the possible solutions.
Abstract:
Molecular Medicine and Molecular Pathology are integral parts of Haematology as we enter the new millennium. Their origins can be linked to fundamental developments in the basic sciences, particularly genetics, chemistry and biochemistry. The structure of DNA and the genetic code that it encrypts are the critical starting points for our understanding of these new disciplines. The genetic alphabet is a simple one, consisting of just 4 letters, but its influence is crucial to human development and differentiation. The concept of a gene is not a new one, but the Human Genome Project (a joint world-wide effort to characterise our entire genetic make-up) is providing an invaluable understanding of how genes function in normal cellular processes and pinpointing how disruption of these processes can lead to disease. Transcription and translation are the key events by which our genotype is converted to our phenotype (via a messenger RNA intermediate), producing the myriad proteins and enzymes which populate the cellular factory of our body. Unlike the bacterial or prokaryotic genome, the human genome contains a large amount of non-coding DNA (less than 1% of our genome codes for proteins), and our genes are interrupted, with the coding regions or exons separated by non-coding introns. Precise removal of the intronic material after transcription (through a process called splicing) is critical for efficient translation to occur. Incorrect splicing can lead to the generation of mutant proteins, which can have a deleterious effect on the phenotype of the individual. Thus the 100,000-200,000 genes which are present in each cell in our body have a defined control mechanism permitting efficient and appropriate expression of proteins and enzymes, and yet a single base change in just one of those genes can lead to diseases such as haemophilia or Fanconi's anaemia.
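As a purely illustrative aside (not part of the abstract above), a few lines of Python show how a single base change can convert an amino-acid codon into a premature stop codon, one way in which a point mutation yields a truncated protein; the tiny codon table below covers only the codons used in this toy example.

# Minimal illustration: a single base change turning a glutamate codon (GAA)
# into a stop codon (TAA) truncates translation of the encoded peptide.
# Only the codons needed for this toy example are included in the table.
CODON_TABLE = {
    "ATG": "M",   # methionine (start)
    "GAA": "E",   # glutamate
    "TGG": "W",   # tryptophan
    "TAA": "*",   # stop
}

def translate(cds):
    """Translate a coding DNA sequence codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(cds) - 2, 3):
        aa = CODON_TABLE[cds[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

normal = "ATGGAATGG"   # Met-Glu-Trp
mutant = "ATGTAATGG"   # a single G->T change creates a premature stop after Met
print(translate(normal))  # MEW
print(translate(mutant))  # M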
Abstract:
In this paper, we investigate the impact of faulty memory bit-cells on the performance of LDPC and Turbo channel decoders, based on realistic memory failure models. Our study investigates the inherent error resilience of such codes to potential memory faults affecting the decoding process. We develop two mitigation mechanisms that reduce the impact of memory faults rather than correcting every single error. We show how protection of only a few bit-cells is sufficient to deal with high defect rates. In addition, we show how the use of repair iterations specifically helps mitigate the impact of faults that occur inside the decoder itself.
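As a hedged illustration of the kind of selective protection discussed above (the 5-bit LLR word format, the protected bit position and all names below are assumptions made for the sketch, not the paper's implementation), the following Python fragment injects random bit-cell faults into a stored fixed-point LLR word while leaving a small set of protected cells, here only the sign bit, untouched.

import random

WORD_BITS = 5          # assumed fixed-point LLR word width (sign + 4 magnitude bits)
PROTECTED_BITS = {4}   # assumed: only the sign/MSB cell uses a robust (fault-free) cell

def inject_faults(word, p_fail):
    """Flip each unprotected bit-cell of a stored LLR word with probability p_fail."""
    for bit in range(WORD_BITS):
        if bit in PROTECTED_BITS:
            continue  # protected cells are assumed fault-free
        if random.random() < p_fail:
            word ^= 1 << bit
    return word

# Example: a faulty read of the stored LLR value 0b01101 at a 1% cell-failure rate.
print(inject_faults(0b01101, 0.01))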
Abstract:
Over the last decade an Auburn-Rollins-Strathclyde consortium has developed several suites of parallel R-matrix codes [1, 2, 3] that can meet the fundamental data needs required for the interpretation of astrophysical observations and/or plasma experiments. Traditionally, our collisional work on light fusion-related atoms has been focused on spectroscopy and impurity transport for magnetically confined fusion devices. Our approach has been to provide a comprehensive excitation/ionization data set for every ion stage of a particular element. As we progress towards a burning fusion plasma, there is a demand for collisional processes involving tungsten, which has required a revitalization of the relativistic R-matrix approach. The implementation of these codes on massively parallel supercomputers has facilitated the progression to models involving thousands of levels in the close-coupling expansion required by the open d and f sub-shell systems of mid-Z tungsten. This work also complements the electron-impact excitation of Fe-peak elements required by astrophysics, in particular the near-neutral species, which offer similar atomic-structure challenges. Although electron-impact excitation is our primary focus in terms of fusion applications, the single-photon photoionisation codes are also being developed in tandem, and benefit greatly from this ongoing work.
Abstract:
Background: Field placement experiences are frequently cited in the literature as having the most impact on student social workers’ learning as they emerge into the profession. Placements are integral to the development of practice competence and to acquiring a sense of social work identity. However, research on the effectiveness of the educational strategies used to deliver learning and assess competence during placement is scarce. Internationally, pressures to meet increasing student enrolments have raised concerns about the potential impact on the quality of placements and of the practice teaching provided. These pressures may also affect the appropriate transfer and application of learning to the student’s practice.
Aim: To identify learning activities rated most useful for developing professional practice competence and professional identity of social work students.
Method: Data were collected from 396 students who successfully completed their first or final placement during 2013-2014 and were registered at one of two universities in Northern Ireland. Students completed a self-administered questionnaire which covered: placement setting and service user group; type of supervision model; frequency of undertaking specific learning activities; who provided the learning; which activities contributed to their developing professional competence and identity; and their overall satisfaction.
Findings: Our findings confirmed the centrality of the supervisory relationship as the vehicle for enabling quality student learning. Shadowing others, receiving regular supervision and receiving constructive feedback were the tasks that students rated as ‘most useful’ for developing professional identity, competence and readiness to practise. Disturbingly, over 50% of students reported that linking practice to the professional codes, practice foci and key roles was not valued as ‘useful’ in terms of readiness to practise, feeling competent and developing a professional social work identity. These results offer strong insights into how both the university and the practice placement environment need to better prepare, assess and support students during practice placements in the field.
Abstract:
Photoionization cross-sections are obtained using the relativistic Dirac Atomic R-matrix Codes (DARC) for all valence and L-shell energy ranges between 27 and 270 eV. A total of 557 levels arising from the dominant configurations 3s²3p⁴, 3s3p⁵, 3p⁶, 3s²3p³[3d, 4s, 4p], 3p⁵3d, 3s²3p²3d², 3s3p⁴3d, 3s3p³3d² and 2s²2p⁵3s²3p⁵ have been included in the target wavefunction representation of the Ar III ion, including up to 4p in the orbital basis. We also performed a smaller Breit-Pauli (BP) calculation containing the lowest 124 levels. Direct comparisons are made with previous theoretical and experimental work for both valence-shell and L-shell photoionization. Excellent agreement was found for transitions involving the ²Pᵒ initial state to all allowed final states for both calculations across a range of photon energies. A number of resonant states have been identified to help analyse and explain the nature of the spectra at photon energies between 250 and 270 eV.
Abstract:
Accurate determination of electron excitation rates for the Fe-peak elements is complicated by the presence of an open 3d-shell in the description of the target ion, which can lead to hundreds of target state energy levels. Furthermore, the low energy scattering region is dominated by series of Rydberg resonances, which require a very fine energy mesh for their delineation. These problems have prompted the development of a suite of parallel R-matrix codes. In this work we report recent applications of these codes to the study of electron impact excitation of Ni III and Ni IV.
Abstract:
A comparison of collision strengths and effective collision strengths has been undertaken for the Cr II ion based on the model of Wasson et al [2010 A&A 524, A35]. Calculations have been completed using the Breit-Pauli, RMATRX II and DARC suites of codes.
Abstract:
The research presented investigates the optimal set of operation codes (opcodes) that creates a robust indicator of malicious software (malware), and also determines the program execution duration required for accurate classification of benign and malicious software. The features extracted from the dataset are opcode density histograms, obtained during program execution. The classifier used is a support vector machine, configured to select the features that produce the optimal classification of malware over different program run lengths. The findings demonstrate that malware can be detected using dynamic analysis with relatively few opcodes.
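As a minimal sketch of the kind of pipeline described above (the opcode vocabulary, the toy traces and every identifier below are placeholders, not the authors' dataset or configuration), opcode density histograms can be fed to a scikit-learn support vector machine roughly as follows.

from collections import Counter
from sklearn.svm import SVC

# Placeholder opcode vocabulary and toy traces; a real study would use traces
# captured during program execution over the chosen run lengths.
VOCAB = ["mov", "push", "pop", "call", "ret", "xor", "jmp", "cmp"]

def density_histogram(trace):
    """Return the opcode density histogram (relative frequencies) of a trace."""
    counts = Counter(trace)
    total = len(trace)
    return [counts[op] / total for op in VOCAB]

traces = [
    ["mov", "push", "call", "ret", "mov", "pop"],   # benign-like toy trace
    ["xor", "xor", "jmp", "cmp", "xor", "call"],    # malicious-like toy trace
]
labels = [0, 1]  # 0 = benign, 1 = malware

X = [density_histogram(t) for t in traces]
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict([density_histogram(["xor", "jmp", "xor", "cmp"])]))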
Abstract:
The ataxin-3 gene (ATXN3; 14q32.1) encodes a ubiquitously expressed protein involved in the ubiquitin-proteasome pathway and in transcriptional repression. Great attention has been given to the ATXN3 gene since the identification of a (CAG)n expansion in its coding region, responsible for the most common ataxia worldwide, SCA3 or Machado-Joseph disease (MJD). MJD is a late-onset, autosomal dominant neurodegenerative disease. The size of the expanded allele explains only part of the pleomorphism of the disease, highlighting the importance of studying other modifiers. In polyglutamine (polyQ) diseases, toxicity is caused by a gain of function of the expanded protein; however, the normal protein also appears to be one of the modifying agents of pathogenesis. The ATXN3 gene has two human paralogues generated by retrotransposition: ataxin-3 like (ATXN3L) on chromosome X, and LOC100132280, not yet characterised, on chromosome 8. In vitro studies have demonstrated the ability of ATXN3L to cleave ubiquitin chains, its proteolytic domain being more efficient than that of the parental ATXN3. The aim of this study was to explore the origin and evolution of the ATXN3L and LOC100132280 retrocopies (here termed ATXN3L1 and ATXN3L2), as well as to test the functional relevance of both through evolutionary and functional approaches. Thus, to study the evolutionary divergence of the ATXN3 paralogues: (1) their phylogenies were analysed and the dates of origin of the retrotransposition events were estimated; (2) the selective pressures to which the three paralogues have been subjected throughout primate evolution were evaluated; and (3) the evolution of the CAG repeats, located in three different genomic contexts and probably subject to different selective pressures, was explored. Finally, for the retrogene that retains an intact open reading frame (ORF), ATXN3L1, the conservation of the sites and protein domains of the putative protein was analysed in silico. In addition, for this retrogene, the mRNA expression pattern was studied by reverse transcription PCR in 16 human tissues. The results obtained suggest that two independent retrotransposition events gave rise to the ATXN3L1 and ATXN3L2 retrogenes, the first having occurred about 63 million years ago (Ma) and the second after the Platyrrhini-Catarrhini split, about 35 Ma. In addition, other retrocopies were found in primates and other mammals, corresponding, however, to more recent and independent retrotransposition events. The evolutionary approach showed the existence of some selective constraints associated with the evolution of ATXN3L1, similar to what is observed for ATXN3. On the other hand, ATXN3L2 has acquired premature stop codons that most likely turned it into a processed pseudogene. The expression analysis showed that the ATXN3L1 gene is transcribed at least in human testis; however, final optimisation of the specific amplification of ATXN3L1 transcripts will allow confirmation of whether expression extends to other tissues. Regarding the mutational mechanism underlying the CAG repeat, the two paralogues showed different patterns of evolution: the ATXN3L1 retrocopy is highly interrupted and scarcely polymorphic, whereas ATXN3L2 presents pure (CAG)n tracts in some species and CGGCAG hexanucleotide tracts in human and chimpanzee. The recent acquisition of the CGGCAG repeat may have resulted from an initial CAG-to-CGG mutation, followed by instability that allowed the expansion of the hexanucleotides. Future studies may be carried out to confirm the expression pattern of the ATXN3L1 gene and to detect the endogenous protein in vivo. In addition, characterisation of the ataxin-3 like 1 protein and its molecular interactors may provide information about its relevance in normal and pathological states.
Abstract:
The stamping industry has shown a growing interest in numerical simulation of sheet metal forming processes, including inverse engineering methods. This is mainly because the trial-and-error techniques widely used in the past are no longer economically competitive. The use of simulation codes is nowadays common practice in industrial environments, since the results typically obtained with codes based on the Finite Element Method (FEM) are well accepted by the industrial and scientific communities. In order to obtain accurate stress and strain fields, an efficient FEM analysis requires correct input data, such as geometries, meshes, non-linear constitutive laws, loadings, friction laws, etc. Inverse problems can be considered with the aim of overcoming these difficulties. In the present work, the following inverse problems in computational mechanics are presented and analysed: (i) parameter identification problems, which concern the determination of input parameters to be used subsequently in constitutive models for numerical simulations; and (ii) initial geometric design problems for blanks and tools, in which the goal is to determine the initial shape of a blank or a tool so that a given geometry is obtained after a forming process. New optimisation strategies are introduced and implemented, leading to more accurate constitutive model parameters. The aim of these strategies is to take advantage of the strengths of each algorithm and to improve the overall efficiency of classical optimisation methods, which are based on single-stage processes. Deterministic algorithms, algorithms inspired by evolutionary processes, or even a combination of the two are used in the proposed strategies. Cascade, parallel and hybrid strategies are presented in detail, the hybrid strategies consisting of combinations of cascade and parallel strategies. Two distinct methods for evaluating the objective function in parameter identification processes are presented and analysed: a single-point analysis and a finite element analysis. The single-point evaluation characterises an infinitesimal amount of material subjected to a given strain history; in the finite element analysis, on the other hand, the constitutive model is implemented and considered at every integration point. Inverse problems of initial geometric design of blanks and tools are then presented and described. For the optimisation of the initial shape of a metal blank, the definition of the initial blank shape for forming an oil-pan component is taken as the case study. Within this scope, a study of the influence of the initial geometric definition of the blank on the optimisation process is also carried out, using a NURBS formulation to define the upper surface of the metal blank, whose geometry changes during the plastic forming process. For tool optimisation, a two-stage forging process is presented. With the aim of obtaining a perfect cylinder after forging, two distinct methods are considered: in the first, the initial shape of the cylinder is optimised, and in the second, the shape of the first-stage forming tool is optimised. Different methods are used to parameterise the free surface of the cylinder, and different parameterisations are also used to define the tool. The optimisation strategies proposed in this work efficiently solve optimisation problems for the metal forming industry.
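As a hedged illustration of the single-point objective-function evaluation described above (the Hollomon-type hardening law, the synthetic data and all names below are assumptions made for the sketch, not the thesis' implementation), a constitutive parameter identification can be posed as a least-squares fit of a hardening law to a measured stress-strain curve.

import numpy as np
from scipy.optimize import least_squares

# Synthetic "experimental" stress-strain points (placeholder data, MPa).
strain = np.linspace(0.01, 0.2, 20)
stress_exp = 520.0 * strain ** 0.22

def residuals(params):
    """Difference between the Hollomon-type law K*eps^n and the measured curve."""
    K, n = params
    return K * strain ** n - stress_exp

# Single-stage identification; cascade/parallel/hybrid strategies would chain
# or combine several such solvers, as discussed in the abstract.
fit = least_squares(residuals, x0=[400.0, 0.1])
print(fit.x)  # identified (K, n), close to (520.0, 0.22)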