56 results for Big five
Abstract:
As the sister group to vertebrates, amphioxus is consistently used as a model of genome evolution for understanding the invertebrate/vertebrate transition. The amphioxus genome has not undergone massive duplications like those in the vertebrates or disruptive rearrangements like those in the genome of Ciona, a urochordate, making it an ideal evolutionary model. Transposable elements have been linked to many genomic evolutionary changes including increased genome size, modified gene expression, massive gene rearrangements, and possibly intron evolution. Despite their importance in genome evolution, few previous examples of transposable elements have been identified in amphioxus. We report five novel Miniature Inverted-repeat Transposable Elements (MITEs) identified by an analysis of amphioxus DNA sequence, which we have named LanceleTn-1, LanceleTn-2, LanceleTn-3a, LanceleTn-3b and LanceleTn-4. Several of the LanceleTn elements were identified in the amphioxus ParaHox cluster, and we suggest these have had important implications for the evolution of this highly conserved gene cluster. The estimated high copy numbers of these elements imply that MITEs are probably the most abundant type of mobile element in amphioxus, and are thus likely to have been of fundamental importance in shaping the evolution of the amphioxus genome.
Abstract:
Currently, most operational forecasting models use latitude-longitude grids, whose convergence of meridians towards the poles limits parallel scaling. Quasi-uniform grids might avoid this limitation. Thuburn et al. (JCP, 2009) and Ringler et al. (JCP, 2010) developed a method for arbitrarily-structured, orthogonal C-grids (TRiSK), which has many of the desirable properties of the C-grid on latitude-longitude grids but works on a variety of quasi-uniform grids. Here, five quasi-uniform, orthogonal grids of the sphere are investigated using TRiSK to solve the shallow-water equations. We demonstrate some of the advantages and disadvantages of the hexagonal and triangular icosahedra, a Voronoi-ised cubed sphere, a Voronoi-ised skipped latitude-longitude grid and a grid of kites, in comparison to a full latitude-longitude grid. We show that the hexagonal icosahedron gives the most accurate results for the least computational cost. All of the grids suffer from spurious computational modes; this is especially true of the kite grid, despite it having exactly twice as many velocity degrees of freedom as height degrees of freedom. However, the computational modes are easiest to control on the hexagonal icosahedron, since they consist of vorticity oscillations on the dual grid which can be controlled using a diffusive advection scheme for potential vorticity.
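For reference, the rotating shallow-water equations that TRiSK-type schemes discretise can be written in the standard vector-invariant form (a textbook statement of the equations, not copied from this paper):

```latex
\begin{align}
\frac{\partial h}{\partial t} + \nabla\cdot(h\mathbf{u}) &= 0, \\
\frac{\partial \mathbf{u}}{\partial t} + q\,h\,\mathbf{u}^{\perp}
  + \nabla\!\left(g(h+b) + \tfrac{1}{2}|\mathbf{u}|^{2}\right) &= \mathbf{0},
\end{align}
```

where $h$ is the fluid depth, $\mathbf{u}$ the velocity, $b$ the bottom topography, $\mathbf{u}^{\perp} = \mathbf{k}\times\mathbf{u}$, and $q = (f + \mathbf{k}\cdot\nabla\times\mathbf{u})/h$ is the potential vorticity; it is the advection of $q$ that the diffusive scheme mentioned above acts on to damp the dual-grid vorticity oscillations.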
Abstract:
In 2003 the European Commission started using Impact Assessment (IA) as the main empirical basis for its major policy proposals. The aim was to systematically assess ex ante the economic, social and environmental impacts of EU policy proposals. In parallel, research proliferated in search of theoretical grounds for IAs and in attempts to evaluate empirically the performance of the first sets of IAs produced by the European Commission. This paper combines conceptual and evaluative studies carried out in the first five years of EU IAs. It concludes that the great discrepancy between rationale and practice calls for a different theoretical focus and greater emphasis on empirically evaluating crucial risk-economics aspects of IAs, such as the value of a statistical life, the price of carbon, and the integration of macroeconomic modelling and scenario analysis.
Abstract:
In this article the author discusses participative modelling in system dynamics and the issues underlying it. It states that at the heart of system dynamics lies servo-mechanism theory. It argues that it is wrong to expect an optimal solution to be adopted by the empowered parties simply because it appears self-evidently true, and that analysis alone is not enough to encourage people to do things in a different way. It also mentions other models, including the simulation models used for developing strategy discussions.
Abstract:
Objectives: To examine 397 strains of Salmonella enterica of human and animal origin comprising 35 serotypes for the presence of aadB, aphAI-IAB, aadA1, aadA2, bla(Carb(2)) or pse1, bla(Tem), cat1, cat2, dhfr1, floR, strA, sul1, sul2, tetA(A), tetA(B) and tetA(G) genes, the presence of class 1 integrons and the relationship of resistance genes to integrons and antibiotic resistance. Results: Some strains were resistant to ampicillin (91), chloramphenicol (85), gentamicin (2), kanamycin (14), spectinomycin (81), streptomycin (119), sulfadiazine (127), tetracycline (108) and trimethoprim (45); 219 strains were susceptible to all antibiotics. bla(Carb(2)), floR and tetA(G) genes were found in S. Typhimurium isolates and one strain of S. Emek only. Class 1 integrons were found in S. Emek, Haifa, Heidelberg, Mbandaka, Newport, Ohio, Stanley, Virchow and in Typhimurium, mainly phage types DT104 and U302. These strains were generally multi-resistant to up to seven antibiotics. Resistance to between three and six antibiotics was also associated with class 1 integron-negative strains of S. Binza, Dublin, Enteritidis, Hadar, Manhattan, Mbandaka, Montevideo, Newport, Typhimurium DT193 and Virchow. Conclusion: The results illustrate the specificity of some resistance genes to S. Typhimurium or non-S. Typhimurium serotypes and the involvement of both class 1 integron- and non-class 1 integron-associated multi-resistance in several serotypes. These data also indicate that the bla(Carb(2)), floR and tetA(G) genes reported in the SG1 region of S. Typhimurium DT104, U302 and some other serotypes are still predominantly limited to S. Typhimurium strains.
Abstract:
We are sympathetic to Bentley et al.'s attempt to encompass the wisdom of crowds in a generative model, but posit that success in using Big Data will require more sensitive measurements and more, and more varied, sources of information, as well as building on the indirect information available through technology, from ancillary technical features to data from brain-computer interfaces.
Abstract:
Suburban areas continue to grow rapidly and are potentially an important land-use category for anthropogenic carbon-dioxide (CO2) emissions. Here eddy covariance techniques are used to obtain ecosystem-scale measurements of CO2 fluxes (FC) from a suburban area of Baltimore, Maryland, USA (2002–2006). These are among the first multi-year measurements of FC in a suburban area. The study area is characterized by low population density (1500 inhabitants km−2) and abundant vegetation (67.4% vegetation land-cover). FC is correlated with photosynthetically active radiation (PAR), soil temperature, and wind direction. Missing hourly FC is gap-filled using empirical relations between FC, PAR, and soil temperature. Diurnal patterns show net CO2 emissions to the atmosphere during winter and net CO2 uptake by the surface during summer daytime hours (summer daily total is −1.25 g C m−2 d−1). Despite the large amount of vegetation, the suburban area is a net CO2 source of 361 g C m−2 y−1 on average.
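The gap-filling step described above can be sketched as follows. This is an illustrative example only, not the study's actual procedure: the function names, thresholds, and parameter values are assumptions, and the fitted relations stand in for whatever empirical FC-PAR and FC-temperature relations the authors derived. Daytime gaps are filled with a rectangular-hyperbola light-response curve in PAR, nighttime gaps with a Q10-style soil-temperature response.

```python
def fc_daytime(par, alpha=0.05, beta=20.0, r_d=3.0):
    """Rectangular-hyperbola light response; negative = net CO2 uptake.

    alpha, beta, r_d are placeholder fit parameters, not values from the study.
    """
    return -(alpha * par * beta) / (alpha * par + beta) + r_d

def fc_nighttime(t_soil, r_ref=2.0, q10=2.0, t_ref=10.0):
    """Q10-style exponential respiration response to soil temperature."""
    return r_ref * q10 ** ((t_soil - t_ref) / 10.0)

def gap_fill(fc, par, t_soil):
    """Replace missing hourly FC values (None) with modelled values."""
    filled = []
    for f, p, t in zip(fc, par, t_soil):
        if f is not None:
            filled.append(f)            # keep observed flux
        elif p > 10.0:                  # crude daytime test via PAR
            filled.append(fc_daytime(p))
        else:                           # nighttime: respiration only
            filled.append(fc_nighttime(t))
    return filled
```

A daytime gap at high PAR is filled with a negative (uptake) flux, while a nighttime gap is filled with a positive respiration flux, reproducing the diurnal pattern the abstract describes.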
Abstract:
JASMIN is a super-data-cluster designed to provide a high-performance, high-volume data analysis environment for the UK environmental science community. Thus far, JASMIN has been used primarily by the atmospheric science and earth observation communities, both to support their direct scientific workflows and to curate data products in the STFC Centre for Environmental Data Archival (CEDA). The initial JASMIN configuration and first experiences are reported here, and useful improvements in scientific workflow are presented. It is clear from the explosive growth in stored data and usage that there was pent-up demand for a suitable big-data analysis environment. This demand is not yet satisfied, in part because JASMIN does not yet have enough compute capacity, its storage is fully allocated, and not all software needs are met. Plans to address these constraints are introduced.
Abstract:
This study examines the feedback practices of 110 EFL teachers from five different countries (Cyprus, France, Korea, Spain, and Thailand), working in secondary school contexts. All provided feedback on the same student essay. The coding scheme developed to analyse the feedback operates on two axes: the stance the teachers assumed when providing feedback, and the focus of their feedback. Most teachers reacted as language teachers, rather than as readers of communication. The teachers overwhelmingly focused on grammar in their feedback and assumed what we called a Provider role, providing the correct forms for the student. A second role, Initiator, was also present, in which teachers indicated errors or issues to the learner but expected the learner to take these up and work on them. This role was associated with a more even spread of feedback focus, where teachers also provided feedback on other areas, such as lexis, style and discourse.
Abstract:
Owing to continuous advances in the computational power of handheld devices such as smartphones and tablet computers, it has become possible to perform Big Data operations, including modern data mining processes, onboard these small devices. A decade of research has proved the feasibility of what has been termed Mobile Data Mining, with a focus on a single mobile device running data mining processes. However, it was not until 2010 that the authors of this book initiated the Pocket Data Mining (PDM) project, exploiting seamless communication among handheld devices to perform data analysis tasks that were infeasible until recently. PDM is the process of collaboratively extracting knowledge from distributed data streams in a mobile computing environment. This book provides the reader with an in-depth treatment of this emerging area of research. Details of the techniques used and thorough experimental studies are given. More importantly, and exclusive to this book, the authors provide a detailed practical guide to the deployment of PDM in the mobile environment. An important extension to the basic implementation of PDM, dealing with concept drift, is also reported. In the era of Big Data, potential applications of paramount importance offered by PDM in a variety of domains, including security, business and telemedicine, are discussed.
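The collaborative knowledge extraction described above can be illustrated with a minimal sketch. This is not the book's PDM implementation: the `Agent` class, the weights, and the toy classifiers are all hypothetical. It only shows the general idea of several devices, each holding a model trained on its own local stream, combining their predictions by weighted majority vote.

```python
from collections import defaultdict

class Agent:
    """A mobile device holding a locally trained model (hypothetical)."""
    def __init__(self, name, model, weight=1.0):
        self.name = name
        self.model = model      # any callable: instance -> label
        self.weight = weight    # e.g. recent accuracy on the local stream

    def predict(self, instance):
        return self.model(instance)

def collaborative_predict(agents, instance):
    """Combine the agents' local predictions by weighted majority vote."""
    votes = defaultdict(float)
    for agent in agents:
        votes[agent.predict(instance)] += agent.weight
    return max(votes, key=votes.get)

# Toy classifiers standing in for stream-trained models on each device.
agents = [
    Agent("phone",  lambda x: "spam" if x > 0.5 else "ham", weight=0.9),
    Agent("tablet", lambda x: "spam" if x > 0.7 else "ham", weight=0.6),
    Agent("watch",  lambda x: "spam",                       weight=0.2),
]
print(collaborative_predict(agents, 0.6))  # prints "spam"
```

Weighting each device's vote by its recent local accuracy is one simple way such an ensemble can adapt when a device's stream drifts; the book's concept-drift extension is, of course, more elaborate.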
Abstract:
The term 'big data' has recently emerged to describe a range of technological and commercial trends enabling the storage and analysis of huge amounts of customer data, such as that generated by social networks and mobile devices. Much of the commercial promise of big data is in the ability to generate valuable insights from collecting new types and volumes of data in ways that were not previously economically viable. At the same time a number of questions have been raised about the implications for individual privacy. This paper explores key perspectives underlying the emergence of big data, and considers both the opportunities and ethical challenges raised for market research.
Abstract:
This article reflects on a decade of British counterinsurgency operations. Questioning the idea that lessons have been learnt, the paper challenges the assumptions that are being used to frame future strategic choice. Suggesting that defence engagement is primarily focused on optimising overseas interventions while avoiding a deeper strategic reassessment about whether the UK should be undertaking these sorts of activities, the article calls for a proper debate on Britain's national security interests.