923 results for THE 30s GENERATION
Abstract:
This study details validation of two separate multiplex STR systems for use in paternity investigations. These are the Second Generation Multiplex (SGM) developed by the UK Forensic Science Service and the PowerPlex 1 multiplex commercially available from Promega Inc. (Madison, WI, USA). These multiplexes contain 12 different STR systems (two are duplicated in the two systems). Population databases from Caucasian, Asian and Afro-Caribbean populations have been compiled for all loci. In all but two of the 36 STR/ethnic group combinations, no evidence was obtained to indicate inconsistency with Hardy-Weinberg (HW) proportions. Empirical and theoretical approaches have been taken to validate these systems for paternity testing. Samples from 121 cases of disputed paternity were analysed using established Single Locus Probe (SLP) tests currently in use, and also using the two multiplex STR systems. Results of all three test systems were compared and no non-conformities in the conclusions were observed, although four examples of apparent germ line mutations in the STR systems were identified. The data was analysed to give information on expected paternity indices and exclusion rates for these STR systems. The 12 systems combined comprise a highly discriminating test suitable for paternity testing. 99.96% of non-fathers are excluded from paternity on two or more STR systems. Where no exclusion is found, Paternity Index (PI) values of > 10,000 are expected in > 96% of cases.
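The headline figures above follow from simple probability arithmetic: per-locus likelihood ratios multiply into a combined Paternity Index, and per-locus exclusion powers combine binomially across the 12 loci. A minimal sketch, using made-up per-locus values rather than the paper's population data:

```python
from math import prod

# Hypothetical per-locus paternity indices (PI) and an assumed average
# per-locus power of exclusion (PE); real values come from the population
# allele-frequency databases described in the abstract.
locus_pi = [3.2, 5.1, 2.8, 4.0, 6.5, 3.7, 2.9, 5.5, 4.4, 3.1, 2.6, 4.8]
pe = 0.55

# Per-locus likelihood ratios multiply to give the combined PI.
combined_pi = prod(locus_pi)

# Probability that a random non-father is excluded on two or more of the
# 12 loci: 1 - P(no exclusions) - P(exactly one exclusion).
n = 12
p_none = (1 - pe) ** n
p_one = n * pe * (1 - pe) ** (n - 1)
p_two_plus = 1 - p_none - p_one
```

With these illustrative inputs the combined PI already exceeds 10,000 and the probability of excluding a non-father on at least two loci is above 99%, consistent in spirit with the figures quoted above.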
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure-function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense.
If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
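The "data amplification" idea above can be illustrated with a toy generator: sample a few "fundamental" parameters from assumed distributions and grow a dendritic tree recursively. The distributions, taper rule and bifurcation probability below are illustrative assumptions, not the actual L-NEURON parameter set:

```python
import random

random.seed(0)  # deterministic demo

def grow(diameter, depth=0, max_depth=8):
    """Return a nested tree of (diameter, children) tuples.

    Assumed rules: the chance of bifurcating falls as the branch thins,
    and daughter diameters split around a sampled asymmetry ratio.
    """
    if depth >= max_depth or diameter < 0.2 or random.random() > diameter:
        return (diameter, [])  # terminal tip
    ratio = random.uniform(0.3, 0.7)  # bifurcation asymmetry (assumption)
    taper = 0.9                       # diameter taper per branch (assumption)
    return (diameter, [grow(diameter * taper * ratio, depth + 1, max_depth),
                       grow(diameter * taper * (1 - ratio), depth + 1, max_depth)])

def count_tips(tree):
    """Count terminal branches in a generated tree."""
    _, children = tree
    return 1 if not children else sum(count_tips(c) for c in children)

tree = grow(1.0)  # one 'virtual dendrite' grown from a 1.0-unit trunk
```

Rerunning with different seeds yields many distinct virtual trees from the same handful of parameters, which is the compression/amplification point made above.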
Abstract:
For farmers, the decision as to when to retire is probably one of the hardest that they will have to face during their working lives. The business of farming brings special circumstances, which mean that retirement is more often a process than a definitive action. This paper seeks firstly to clarify those special circumstances and then, by means of flow charts, to identify the key decisions which must be faced if retirement is to be successful. Handing on the farmland and handing on the other business assets to the next generation are regarded as separate but interrelated stages in the process of retirement, each having legal, financial and human consequences, which are considered. By way of conclusion, the parameters for a successful retirement are considered, both from the standpoint of the retiree and of the successor.
Abstract:
A wild house mouse (Mus domesticus) population, originally trapped near Reading, Berkshire, United Kingdom, and maintained as a colony in the laboratory, was subjected to the discriminating feeding period of the warfarin resistance test, as used by Wallace and MacSwiney (1976) and derived from the work of Rowe and Redfern (1964). Eighty percent of this heterogeneous population survived the resistance test. A similar proportion of the population was found to survive the normally lethal dose of bromadiolone administered by oral gavage. The majority of this population of mice were classified as "warfarin-resistant" and "bromadiolone-resistant". The dose of 10 mg kg-1 of bromadiolone administered by oral gavage appeared to give good discrimination between susceptible and resistant individuals. The results of breeding tests indicate a single dominant gene that confers both "warfarin-resistance" and "bromadiolone-resistance", with complete expression of the resistance genotype in both males and females. Individual mice were classified as to genotype by back-crossing to a homozygous-susceptible strain and resistance-testing the F1 generation. Separate strains of homozygous-resistant and homozygous-susceptible house mice are now being established.
Abstract:
This paper investigates the often neglected payoff to investments in the health of girls and women in terms of next-generation outcomes. It examines the intergenerational persistence of health across time and region, as well as across the distribution of maternal health, using comparable microdata on as many as 2.24 million children born to about 0.6 million mothers in 38 developing countries over the 31-year period 1970–2000. Mother's health is indicated by her height, BMI and anemia status; child health is indicated by mortality risk and anthropometric failure. We find a positive relationship between maternal and child health across indicators and highlight non-linearities in these relationships. The results suggest that both the contemporary and the childhood health of the mother matter and that the benefits to the next generation are likely to be persistent. Averaging across the sample, persistence shows a considerable decline over time. Disaggregation shows that the decline is only significant in Latin America; persistence has remained largely constant in Asia and has risen in Africa. The paper provides the first cross-country estimates of the intergenerational persistence in health and the first estimates of trends.
Abstract:
The redistribution of a finite amount of martian surface dust during global dust storms and in the intervening periods has been modelled in a dust lifting version of the UK Mars General Circulation Model. When using a constant, uniform threshold in the model’s wind stress lifting parameterisation and assuming an unlimited supply of surface dust, multiannual simulations displayed some variability in dust lifting activity from year to year, arising from internal variability manifested in surface wind stress, but dust storms were limited in size and formed within a relatively short seasonal window. Lifting thresholds were then allowed to vary at each model gridpoint, dependent on the rates of emission or deposition of dust. This enhanced interannual variability in dust storm magnitude and timing, such that model storms covered most of the observed ranges in size and initiation date within a single multiannual simulation. Peak storm magnitude in a given year was primarily determined by the availability of surface dust at a number of key sites in the southern hemisphere. The observed global dust storm (GDS) frequency of roughly one in every 3 years was approximately reproduced, but the model failed to generate these GDSs spontaneously in the southern hemisphere, where they have typically been observed to initiate. After several years of simulation, the surface threshold field—a proxy for net change in surface dust density—showed good qualitative agreement with the observed pattern of martian surface dust cover. The model produced a net northward cross-equatorial dust mass flux, which necessitated the addition of an artificial threshold decrease rate in order to allow the continued generation of dust storms over the course of a multiannual simulation. 
At standard model resolution, for the southward mass flux due to cross-equatorial flushing storms to offset the northward flux due to GDSs on a timescale of ∼3 years would require an increase in the former by a factor of 3–4. Results at higher model resolution and uncertainties in dust vertical profiles mean that quasi-periodic redistribution of dust on such a timescale nevertheless appears to be a plausible explanation for the observed GDS frequency.
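The variable-threshold scheme described above can be sketched as a per-gridpoint update rule: lifting depletes surface dust and raises the threshold, deposition lowers it, and a small artificial decrease allows storms to keep recurring. The linear form and all constants below are assumptions for illustration, not the UK MGCM parameterisation:

```python
def update_threshold(tau_t, lifted, deposited,
                     k_lift=0.5, k_dep=0.3, artificial_decrease=1e-4,
                     tau_min=0.01, tau_max=0.1):
    """Advance a gridpoint's wind-stress lifting threshold (Pa) one timestep.

    lifted / deposited are the dust masses emitted or deposited at this
    gridpoint during the timestep; coefficients are illustrative only.
    """
    tau_t += k_lift * lifted        # emission depletes surface dust: raise threshold
    tau_t -= k_dep * deposited      # deposition replenishes it: lower threshold
    tau_t -= artificial_decrease    # slow relaxation so lifting can resume
    return min(max(tau_t, tau_min), tau_max)  # clamp to physical bounds
```

Under this rule a site that feeds a large storm ends up with a high threshold (a proxy for a dust-depleted surface), while net deposition regions become easier lifting sources later, which is the feedback the abstract credits with enhancing interannual variability.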
Abstract:
Anchored in the service-dominant logic and service innovation literature, this study investigates the drivers of employee generation of ideas for service improvement (GISI). Employee GISI focuses on customer needs and on providing the exact service wanted by customers. GISI should enhance competitive advantage and organizational success (cf. Berry et al. 2006; Wang and Netemeyer 2004). Despite its importance, there is little research on the idea generation stage of the service development process (Chai, Zhang, and Tan 2005). This study contributes to the service field by providing the first empirical evaluation of the drivers of GISI. It also investigates a new explanatory determinant of reading of customer needs, namely perceived organizational support (POS), and an outcome of POS in the form of emotional exhaustion. Results show that the major driver of GISI is the reading of customer needs by employees, followed by affective organizational commitment and job satisfaction. This research provides several new and important insights for service management practice by suggesting that special care should be put into selecting and recruiting employees who have the ability to read customer needs. Additionally, organizations should invest in creating work environments that encourage and reward the flow of ideas for service improvement.
Abstract:
Human ICT implants, such as RFID implants, cochlear implants, cardiac pacemakers, Deep Brain Stimulation, bionic limbs connected to the nervous system, and networked cognitive prostheses, are becoming increasingly complex. With ever-growing data processing functionalities in these implants, privacy and security become vital concerns. Electronic attacks on human ICT implants can cause significant harm, both to implant subjects and to their environment. This paper explores the vulnerabilities to crime victimisation which human implants pose in light of recent technological developments, and analyses how the law can deal with emerging challenges of what may well become the next generation of cybercrime: attacks targeted at technology implanted in the human body. After a state-of-the-art description of relevant types of human implants and a discussion of how these implants challenge existing perceptions of the human body, we describe how various modes of attack, such as sniffing, hacking, data interference, and denial of service, can be committed against implants. Subsequently, we analyse how these attacks can be assessed under current substantive and procedural criminal law, drawing on examples from UK and Dutch law. The possibilities and limitations of cybercrime provisions (e.g., unlawful access, system interference) and bodily integrity provisions (e.g., battery, assault, causing bodily harm) to deal with human-implant attacks are analysed. Based on this assessment, the paper concludes that attacks on human implants are not only a new generation in the evolution of cybercrime, but also raise fundamental questions about how criminal law conceives of attacks. Traditional distinctions between physical and non-physical modes of attack, between human bodies and things, and between the exterior and interior of the body need to be re-interpreted in light of developments in human implants.
As the human body and technology become increasingly intertwined, cybercrime legislation and body-integrity crime legislation will also become intertwined, posing a new puzzle that legislators and practitioners will sooner or later have to solve.
Abstract:
1 The recent increase in planting of selected willow clones as energy crops for biomass production has resulted in a need to understand the relationship between commonly grown, clonally propagated genotypes and their pests. 2 For the first time, we present a study of the interactions of six willow clones and a previously unconsidered pest, the giant willow aphid Tuberolachnus salignus. 3 Tuberolachnus salignus alatae displayed no preference between the clones, but there was genetic variation in resistance between the clones; Q83 was the most resistant and led to the lowest reproductive performance in the aphid. 4 Maternal effects buffered changes in aphid performance: on four tested willow clones, the fecundity of first-generation aphids on the new host clone was intermediate between that of the second generation and that on the clone used to maintain the aphids in culture. 5 In the field, patterns of aphid infestation were highly variable between years, with the duration of attack being up to four times longer in 1999. In both years there was a significant effect of willow clone on the intensity of infestation. However, whereas Orm had the lowest intensity of infestation in the first year, Dasyclados supported a lower population level than other monitored clones in the second year.
Abstract:
Runoff generation processes and pathways vary widely between catchments. Credible simulations of solute and pollutant transport in surface waters depend on models which facilitate appropriate, catchment-specific representations of perceptual models of the runoff generation process. Here, we present a flexible, semi-distributed, landscape-scale rainfall-runoff modelling toolkit suitable for simulating a broad range of user-specified perceptual models of runoff generation and stream flow occurring in different climatic regions and landscape types. PERSiST (the Precipitation, Evapotranspiration and Runoff Simulator for Solute Transport) is designed for simulating present-day hydrology; projecting possible future effects of climate or land use change on runoff and catchment water storage; and generating hydrologic inputs for the Integrated Catchments (INCA) family of models. PERSiST has limited data requirements and is calibrated using observed time series of precipitation, air temperature and runoff at one or more points in a river network. Here, we apply PERSiST to the river Thames in the UK and describe a Monte Carlo tool for model calibration, sensitivity and uncertainty analysis.
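The Monte Carlo calibration tool mentioned above can be sketched generically: draw candidate parameter sets at random, run the rainfall-runoff model, and score each against observed flow with a goodness-of-fit metric such as Nash-Sutcliffe efficiency. The one-parameter linear-reservoir model below is a stand-in for illustration, not PERSiST itself:

```python
import random

random.seed(1)  # deterministic demo

def simulate(precip, k):
    """Linear-reservoir runoff: storage drains at fraction k per timestep."""
    storage, flow = 0.0, []
    for p in precip:
        storage += p
        q = k * storage
        storage -= q
        flow.append(q)
    return flow

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

precip = [5, 0, 3, 8, 0, 0, 2, 6, 1, 0]
observed = simulate(precip, 0.35)   # synthetic 'truth' for the demo

# Monte Carlo search: uniform random draws over an assumed prior range.
best_k, best_score = None, float("-inf")
for _ in range(2000):
    k = random.uniform(0.05, 0.95)
    score = nse(observed, simulate(precip, k))
    if score > best_score:
        best_k, best_score = k, score
```

Keeping all (parameter, score) pairs instead of only the best one is what turns the same loop into a sensitivity and uncertainty analysis, in the spirit described above.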
Abstract:
Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that the identification and correction of short-term climate model errors have the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal-to-decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model's long-term pervasive SST errors. A protocol is designed to attribute the SST biases to their source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation; (2) determine the time scale of the advent of the bias and its propagation; (3) find the geographical origin of the bias; (4) evaluate the degree of coupling in the development of the bias; (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale of bias advent, regionally restored experiments show the geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in the bias development. This strategy is applied to four prominent SST biases of the IPSLCM5A-LR coupled model in the tropical Pacific that are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears in a few months and is caused by a lack of upwelling due to too-weak meridional coastal winds off Peru.
The cold equatorial bias, which surprisingly takes 30 years to develop, is the result of equatorward advection of midlatitude cold SST errors. Despite large development efforts, the current generation of coupled models shows little improvement. The strategy proposed in this study is a further step towards moving from the current random, ad hoc approach to a bias-targeted, priority-setting, systematic model development approach.
Abstract:
The genetics of stipule spot pigmentation (SSP) in faba bean (Vicia faba L.) was studied using four inbred lines, of which Disco/2 was zero-tannin (zt2) with colourless stipule spots, ILB938/2 was normal-tannin (ZT2) with colourless stipule spots, and both Aurora/2 and Mélodie/2 were ZT2 with coloured stipule spots. Crosses Mélodie/2 × ILB 938/2, Mélodie/2 × Disco/2, ILB 938/2 × Aurora/2 and ILB 938/2 × Disco/2 (A, B, C and D, respectively) were prepared, along with reciprocals and backcrosses, and advanced through single-seed descent. All F1 hybrid plants had pigmented stipule spots, and in the F2 generation, the segregation ratio fit 3 coloured:1 colourless in crosses A, B and C and 9:7 in cross D. In the F3 generation, the ratio fit 5:3 in crosses A and C and 25:39 in cross D, and in the F4 generation, 9:7 in cross A. SSP was linked to the zero-tannin characteristic (white flower) only in cross B. The results show that coloured stipule spot is dominant to colourless and that colouration is determined by two unlinked complementary recessive genes. We propose the symbols ssp2 for the gene associated with zt2 in Disco/2 and ssp1 for the gene not associated with tannin content in ILB938/2. The novel ssp1 locus was mapped in the F5 generation of cross ‘A’ using Medicago truncatula-derived single-nucleotide polymorphism markers and was located on chromosome 1 of faba bean, in a well-conserved region of M. truncatula chromosome 5 containing candidate Myb and basic helix-loop-helix transcription factor genes.
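Segregation ratios like the 3:1 and 9:7 fits reported above are conventionally tested with a chi-square goodness-of-fit statistic. A minimal sketch with hypothetical seedling counts (the abstract does not give the actual counts):

```python
def chi_square(observed, expected_ratio):
    """Chi-square statistic for observed counts against an expected ratio."""
    total = sum(observed)
    k = sum(expected_ratio)
    expected = [total * r / k for r in expected_ratio]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# e.g. 152 coloured : 48 colourless F2 plants vs the 3:1 expectation.
chi2 = chi_square([152, 48], [3, 1])

# With 1 degree of freedom the critical value at alpha = 0.05 is 3.84,
# so the 3:1 hypothesis is not rejected for these illustrative counts.
fits = chi2 < 3.84
```

The same function handles the two-gene 9:7 expectation, e.g. `chi_square([90, 70], [9, 7])`, with the degrees of freedom again being one fewer than the number of phenotype classes.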
Abstract:
Epigenetic modification of the genome via cytosine methylation is a dynamic process that responds to changes in the growing environment. This modification can also be heritable. The combination of both properties means that there is the potential for the life experiences of the parental generation to modify the methylation profiles of their offspring, and so potentially to ‘pre-condition’ them to better accommodate abiotic conditions encountered by their parents. We recently identified high vapor pressure deficit (vpd)-induced DNA methylation at two gene loci in the stomatal development pathway, and an associated reduction in leaf stomatal frequency [1]. Here, we test whether this epigenetic modification pre-conditioned parents and their offspring to the more severe water stress of periodic drought. We found that three generations of high-vpd-grown plants were better able to withstand periodic drought stress over two generations. This resistance was not directly associated with de novo methylation of the target stomata genes, but was associated with the cmt3 mutant's inability to maintain asymmetric sequence-context methylation. If our finding applies widely, it could have significant implications for evolutionary biology and for breeding for stressful environments.
Abstract:
Background: Dietary assessment methods are important tools for nutrition research. Online dietary assessment tools have the potential to become invaluable methods of assessing dietary intake because, compared with traditional methods, they have many advantages, including the automatic storage of input data and the immediate generation of nutritional outputs. Objective: The aim of this study was to develop an online food frequency questionnaire (FFQ) for dietary data collection in the “Food4Me” study and to compare this with the validated European Prospective Investigation of Cancer (EPIC) Norfolk printed FFQ. Methods: The Food4Me FFQ used in this analysis was developed to consist of 157 food items. Standardized color photographs were incorporated in the development of the Food4Me FFQ to facilitate accurate quantification of the portion size of each food item. Participants were recruited in two centers (Dublin, Ireland and Reading, United Kingdom) and each received the online Food4Me FFQ and the printed EPIC-Norfolk FFQ in random order. Participants completed the Food4Me FFQ online and, for most food items, were requested to choose their usual serving size among seven possibilities from a range of portion size pictures. The level of agreement between the two methods was evaluated for both nutrient and food group intakes using the Bland and Altman method and classification into quartiles of daily intake. Correlations were calculated for nutrient and food group intakes. Results: A total of 113 participants were recruited, with a mean age of 30 (SD 10) years (40.7% male, 46/113; 59.3% female, 67/113). Cross-classification into exact plus adjacent quartiles ranged from 77% to 97% at the nutrient level and 77% to 99% at the food group level. Agreement at the nutrient level was highest for alcohol (97%) and lowest for percent energy from polyunsaturated fatty acids (77%). Crude unadjusted correlations for nutrients ranged between .43 and .86.
Agreement at the food group level was highest for “other fruits” (eg, apples, pears, oranges) and lowest for “cakes, pastries, and buns”. For food groups, correlations ranged between .41 and .90. Conclusions: The results demonstrate that the online Food4Me FFQ has good agreement with the validated printed EPIC-Norfolk FFQ for assessing both nutrient and food group intakes, rendering it a useful tool for ranking individuals based on nutrient and food group intakes.
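The quartile cross-classification used in the Results can be sketched as follows: rank each participant's intake under each method, assign quartiles, and count exact-plus-adjacent agreement. The intake values below are made up for illustration, not Food4Me data:

```python
def quartile(value, sorted_values):
    """Quartile index 0..3 of value within its own method's distribution."""
    n = len(sorted_values)
    rank = sum(v <= value for v in sorted_values)  # 1-based rank in sample
    return min(3, 4 * (rank - 1) // n)

# Hypothetical daily intakes for 12 participants under the two methods.
ffq_online = [12, 45, 33, 27, 51, 18, 39, 22, 48, 30, 25, 41]
ffq_printed = [14, 43, 35, 25, 49, 20, 36, 24, 50, 28, 27, 40]

qs_a = sorted(ffq_online)
qs_b = sorted(ffq_printed)
pairs = [(quartile(a, qs_a), quartile(b, qs_b))
         for a, b in zip(ffq_online, ffq_printed)]

# Fraction of participants classified into the same or an adjacent quartile.
agree = sum(abs(qa - qb) <= 1 for qa, qb in pairs) / len(pairs)
```

For these made-up intakes every participant lands in the same quartile under both methods, so the exact-plus-adjacent agreement is 100%; real data, as the Results note, gave 77% to 99% depending on the nutrient or food group.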