960 results for Perfect
Abstract:
Background: Psychotic phenomena appear to form a continuum with normal experience and beliefs, and may build on common emotional interpersonal concerns. Aims: We tested predictions that paranoid ideation is exponentially distributed and hierarchically arranged in the general population, and that persecutory ideas build on more common cognitions of mistrust, interpersonal sensitivity and ideas of reference. Method: Items were chosen from the Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II) questionnaire and the Psychosis Screening Questionnaire in the second British National Survey of Psychiatric Morbidity (n = 8580), to test a putative hierarchy of paranoid development using confirmatory factor analysis, latent class analysis and factor mixture modelling analysis. Results: Different types of paranoid ideation ranged in frequency from less than 2% to nearly 30%. Total scores on these items followed an almost perfect exponential distribution (r = 0.99). Our four a priori first-order factors were corroborated (interpersonal sensitivity; mistrust; ideas of reference; ideas of persecution). These mapped onto four classes of individual respondents: a rare, severe, persecutory class with high endorsement of all item factors, including persecutory ideation; a quasi-normal class with infrequent endorsement of interpersonal sensitivity, mistrust and ideas of reference, and no ideas of persecution; and two intermediate classes, characterised respectively by relatively high endorsement of items relating to mistrust and to ideas of reference. Conclusions: The paranoia continuum has implications for the aetiology, mechanisms and treatment of psychotic disorders, while confirming the lack of a clear distinction from normal experiences and processes.
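As a hedged illustration of the kind of check behind the reported exponential distribution of total scores, the sketch below fits a log-linear (exponential) curve to hypothetical score frequencies; the counts, variable names and decay rate are invented for illustration and are not the survey data.

```python
import numpy as np

# Hypothetical frequencies of total paranoia scores 0..10 (illustrative only,
# not the National Survey data): counts fall off roughly exponentially.
scores = np.arange(11)
counts = np.array([5200, 1800, 700, 280, 110, 45, 20, 9, 4, 2, 1])

# Fit log(counts) = a + b*score; an exponential distribution of scores
# appears as a straight line on this log scale.
b, a = np.polyfit(scores, np.log(counts), 1)
r = np.corrcoef(scores, np.log(counts))[0, 1]

print(f"multiplicative drop in frequency per score point: {np.exp(b):.2f}")
print(f"|r| for the log-linear fit: {abs(r):.3f}")
```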
Abstract:
The ability of six scanning cloud radar scan strategies to reconstruct cumulus cloud fields for radiation study is assessed. Utilizing snapshots of clean and polluted cloud fields from large eddy simulations, an analysis is undertaken of error in both the liquid water path and monochromatic downwelling surface irradiance at 870 nm of the reconstructed cloud fields. Error introduced by radar sensitivity, choice of radar scan strategy, retrieval of liquid water content (LWC), and reconstruction scheme is explored. Given an infinitely sensitive radar and perfect LWC retrieval, domain average surface irradiance biases are typically less than 3 W m−2 μm−1, corresponding to 5–10% of the cloud radiative effect (CRE). However, when using a realistic radar sensitivity of −37.5 dBZ at 1 km, optically thin areas and edges of clouds are difficult to detect due to their low radar reflectivity; in clean conditions, overestimates are of order 10 W m−2 μm−1 (~20% of the CRE), but in polluted conditions, where the droplets are smaller, this increases to 10–26 W m−2 μm−1 (~40–100% of the CRE). Drizzle drops are also problematic; if treated as cloud droplets, reconstructions are poor, leading to large underestimates of 20–46 W m−2 μm−1 in domain average surface irradiance (~40–80% of the CRE). Nevertheless, a synergistic retrieval approach combining the detailed cloud structure obtained from scanning radar with the droplet-size information and location of cloud base gained from other instruments would potentially make accurate solar radiative transfer calculations in broken cloud possible for the first time.
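To make the quoted sensitivity figure concrete, the sketch below applies the usual range-squared sensitivity loss to a minimum detectable reflectivity specified at 1 km; this is a generic radar relation offered as an assumption for illustration, not a calculation from the study.

```python
import numpy as np

def min_detectable_dbz(range_km, sensitivity_dbz_at_1km=-37.5):
    """Minimum detectable reflectivity at a given range, assuming the
    standard r**2 sensitivity degradation (i.e. +20*log10(r) in dB)."""
    return sensitivity_dbz_at_1km + 20.0 * np.log10(range_km)

for r in (0.5, 1.0, 2.0, 5.0):
    print(f"{r:4.1f} km: {min_detectable_dbz(r):6.1f} dBZ")
```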
Abstract:
High-density oligonucleotide (oligo) arrays are a powerful tool for transcript profiling. Arrays based on GeneChip® technology are amongst the most widely used, although GeneChip® arrays are currently available for only a small number of plant and animal species. Thus, we have developed a method to improve the sensitivity of high-density oligonucleotide arrays when applied to heterologous species and tested the method by analysing the transcriptome of Brassica oleracea L., a species for which no GeneChip® array is available, using a GeneChip® array designed for Arabidopsis thaliana (L.) Heynh. Genomic DNA from B. oleracea was labelled and hybridised to the ATH1-121501 GeneChip® array. Arabidopsis thaliana probe-pairs that hybridised to the B. oleracea genomic DNA on the basis of the perfect-match (PM) probe signal were then selected for subsequent B. oleracea transcriptome analysis using a .cel file parser script to generate probe mask files. The transcriptional response of B. oleracea to a mineral nutrient (phosphorus; P) stress was quantified using probe mask files generated for a wide range of gDNA hybridisation intensity thresholds. An example probe mask file generated with a gDNA hybridisation intensity threshold of 400 removed > 68 % of the available PM probes from the analysis but retained > 96 % of available A. thaliana probe-sets. Ninety-nine of these genes were then identified as significantly regulated under P stress in B. oleracea, including the homologues of P stress responsive genes in A. thaliana. Increasing the gDNA hybridisation intensity thresholds up to 500 for probe-selection increased the sensitivity of the GeneChip® array to detect regulation of gene expression in B. oleracea under P stress by up to 13-fold. Our open-source software to create probe mask files is freely available at http://affymetrix.arabidopsis.info/xspecies/ and may be used to facilitate transcriptomic analyses of a wide range of plant and animal species in the absence of custom arrays.
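A minimal sketch of the probe-selection step described above: keep only perfect-match probes whose genomic-DNA hybridisation signal reaches a chosen threshold. The file name, column layout and CSV format are assumptions made for illustration; the actual workflow parses Affymetrix .cel files with the authors' script.

```python
import csv

THRESHOLD = 400  # gDNA hybridisation intensity cut-off, as in the example above

def select_probes(gdna_signal_csv, mask_out):
    """Write a probe mask keeping PM probes whose gDNA signal >= THRESHOLD.

    Assumes a simple CSV with columns 'probe_set', 'probe_xy', 'pm_signal';
    the real workflow operates on .cel files instead.
    """
    kept, total = 0, 0
    with open(gdna_signal_csv) as fin, open(mask_out, "w", newline="") as fout:
        writer = csv.writer(fout)
        for row in csv.DictReader(fin):
            total += 1
            if float(row["pm_signal"]) >= THRESHOLD:
                writer.writerow([row["probe_set"], row["probe_xy"]])
                kept += 1
    print(f"retained {kept}/{total} PM probes at threshold {THRESHOLD}")

# Example (hypothetical file names):
# select_probes("boleracea_gdna_pm_signals.csv", "probe_mask_400.txt")
```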
Abstract:
The electronic structure and oxidation state of atomic Au adsorbed on a perfect CeO2(111) surface have been investigated in detail by means of periodic density functional theory-based calculations, using the LDA+U and GGA+U potentials for a broad range of U values, complemented with calculations employing the HSE06 hybrid functional. In addition, the effects of the lattice parameter a0 and of the starting point for the geometry optimization have also been analyzed. From the present results we suggest that the oxidation state of single Au atoms on CeO2(111) predicted by LDA+U, GGA+U, and HSE06 density functional calculations is not conclusive and that the final picture strongly depends on the method chosen and on the construction of the surface model. In some cases we have been able to locate two well-defined states which are close in energy but with very different electronic structure and local geometries, one with Au fully oxidized and one with neutral Au. The energy difference between the two states is typically within the limits of the accuracy of the present exchange-correlation potentials, and therefore, a clear lowest-energy state cannot be identified. These results suggest the possibility of a dynamic distribution of Au0 and Au+ atomic species at the regular sites of the CeO2(111) surface.
Abstract:
In projections of twenty-first century climate, Arctic sea ice declines and at the same time exhibits strong interannual anomalies. Here, we investigate the potential to predict these strong sea-ice anomalies under a perfect-model assumption, using the Max-Planck-Institute Earth System Model in the same setup as in the Coupled Model Intercomparison Project Phase 5 (CMIP5). We study two cases of strong negative sea-ice anomalies: a 5-year-long anomaly for present-day conditions, and a 10-year-long anomaly for conditions projected for the middle of the twenty-first century. We treat these anomalies in the CMIP5 projections as the truth, and use exactly the same model configuration for predictions of this synthetic truth. We start ensemble predictions at different times during the anomalies, considering lagged-perfect and sea-ice-assimilated initial conditions. We find that the onset and amplitude of the interannual anomalies are not predictable. However, the further deepening of the anomaly can be predicted for typically 1 year lead time if predictions start after the onset but before the maximal amplitude of the anomaly. The magnitude of an extremely low summer sea-ice minimum is hard to predict: the skill of the prediction ensemble is not better than a damped-persistence forecast for lead times of more than a few months, and is not better than a climatology forecast for lead times of two or more years. Predictions of the present-day anomaly are more skillful than predictions of the mid-century anomaly. Predictions using sea-ice-assimilated initial conditions are competitive with those using lagged-perfect initial conditions for lead times of a year or less, but yield degraded skill for longer lead times. The results presented here suggest that there is limited prospect of predicting the large interannual sea-ice anomalies expected to occur throughout the twenty-first century.
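For context on the benchmarks mentioned above, the sketch below constructs a damped-persistence forecast (the initial anomaly relaxed towards zero with the lag autocorrelation) and a climatology forecast for a synthetic sea-ice anomaly series; the data and numbers are invented for illustration and are not from the model experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic detrended sea-ice anomaly time series (illustrative only).
n = 200
anom = np.zeros(n)
for t in range(1, n):
    anom[t] = 0.8 * anom[t - 1] + rng.normal(scale=0.5)

# Lag-1 autocorrelation estimated from the "observed" record.
r1 = np.corrcoef(anom[:-1], anom[1:])[0, 1]

def damped_persistence(initial_anomaly, lead):
    """Forecast: the initial anomaly damped by the lag autocorrelation."""
    return initial_anomaly * r1 ** lead

climatology = 0.0  # forecast of the long-term mean anomaly

start = 150
for lead in (1, 3, 6, 12):
    truth = anom[start + lead]
    dp = damped_persistence(anom[start], lead)
    print(f"lead {lead:2d}: truth {truth:+.2f}, damped persistence {dp:+.2f}, "
          f"climatology {climatology:+.2f}")
```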
Abstract:
Numerical climate models constitute the best available tools to tackle the problem of climate prediction. Two assumptions lie at the heart of their suitability: (1) a climate attractor exists, and (2) the numerical climate model's attractor lies on the actual climate attractor, or at least on the projection of the climate attractor on the model's phase space. In this contribution, the Lorenz '63 system is used both as a prototype system and as an imperfect model to investigate the implications of the second assumption. By comparing results drawn from the Lorenz '63 system and from numerical weather and climate models, the implications of using imperfect models for the prediction of weather and climate are discussed. It is shown that the imperfect model's orbit and the system's orbit are essentially different, purely due to model error and not to sensitivity to initial conditions. Furthermore, if a model is a perfect model, then the attractor, reconstructed by sampling a collection of initialised model orbits (forecast orbits), will be invariant to forecast lead time. This conclusion provides an alternative method for the assessment of climate models.
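A minimal sketch of the kind of twin experiment described above: the Lorenz '63 system serves as the "truth" and an imperfect model is obtained by perturbing one parameter, so that the two orbits diverge even from identical initial conditions, purely through model error. The parameter perturbation and integration settings are illustrative assumptions, not those used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - y) - z, x * y - beta * z]

x0 = [1.0, 1.0, 1.0]
t_span = (0.0, 10.0)
t_eval = np.linspace(*t_span, 1001)

# "Truth": standard parameters. "Imperfect model": rho perturbed by 1%.
truth = solve_ivp(lorenz63, t_span, x0, t_eval=t_eval, rtol=1e-9, atol=1e-9)
model = solve_ivp(lorenz63, t_span, x0, t_eval=t_eval, rtol=1e-9, atol=1e-9,
                  args=(10.0, 28.0 * 1.01, 8.0 / 3.0))

# Error due purely to model error: same initial condition, different equations.
err = np.linalg.norm(truth.y - model.y, axis=0)
for i in (100, 300, 500, 1000):
    print(f"t = {t_eval[i]:4.1f}: |truth - model| = {err[i]:.3f}")
```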
Abstract:
The Plaut, McClelland, Seidenberg and Patterson (1996) connectionist model of reading was evaluated at two points early in its training against reading data collected from British children on two occasions during their first year of literacy instruction. First, the network’s non-word reading was poor relative to word reading when compared with the children. Second, the network made more non-lexical than lexical errors, the opposite pattern to the children. Three adaptations were made to the training of the network to bring it closer to the learning environment of a child: an incremental training regime was adopted; the network was trained on grapheme–phoneme correspondences; and a training corpus based on words found in children’s early reading materials was used. The modifications caused a sharp improvement in non-word reading, relative to word reading, resulting in a near-perfect match to the children’s data on this measure. The modified network, however, continued to make predominantly non-lexical errors, although evidence from a small-scale implementation of the full triangle framework suggests that this limitation stems from the lack of a semantic pathway. Taken together, these results suggest that, when properly trained, connectionist models of word reading can offer insights into key aspects of reading development in children.
Abstract:
Energy storage is a potential alternative to conventional network reinforcement of the low voltage (LV) distribution network to ensure the grid’s infrastructure remains within its operating constraints. This paper presents a study on the control of such storage devices, owned by distribution network operators. A deterministic model predictive control (MPC) controller and a stochastic receding horizon controller (SRHC) are presented, where the objective is to achieve the greatest peak reduction in demand, for a given storage device specification, taking into account the high level of uncertainty in the prediction of LV demand. The algorithms presented in this paper are compared to a standard set-point controller and benchmarked against a control algorithm with a perfect forecast. A specific case study, using storage on the LV network, is presented, and the results of each algorithm are compared. A comprehensive analysis is then carried out simulating a large number of LV networks of varying numbers of households. The results show that the performance of each algorithm is dependent on the number of aggregated households. However, on a typical aggregation, the novel SRHC algorithm presented in this paper is shown to outperform each of the comparable storage control techniques.
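A hedged sketch of the deterministic peak-shaving problem that such a controller solves over its horizon, here with a perfect forecast (the benchmark case above). It uses cvxpy as the optimisation layer; the demand profile, battery ratings and horizon length are invented for illustration and are not the paper's case study.

```python
import cvxpy as cp
import numpy as np

# Illustrative half-hourly LV demand forecast over one day (kW).
demand = 30 + 20 * np.sin(np.linspace(0, 2 * np.pi, 48)) ** 2

p_max = 10.0  # storage power rating (kW), assumed
e_max = 40.0  # storage energy capacity (kWh), assumed
dt = 0.5      # time step (h)

p = cp.Variable(48)               # storage power: positive = discharge
delta_e = cp.cumsum(-p) * dt      # stored energy relative to an initially half-full store

constraints = [
    cp.abs(p) <= p_max,
    delta_e >= -e_max / 2,        # cannot discharge below empty
    delta_e <= e_max / 2,         # cannot charge beyond capacity
]

# Minimise the peak of the net demand seen by the network.
objective = cp.Minimize(cp.max(demand - p))
cp.Problem(objective, constraints).solve()

print(f"original peak: {demand.max():.1f} kW, "
      f"peak with storage: {(demand - p.value).max():.1f} kW")
```

A receding-horizon controller would re-solve this problem at each step with an updated (and, in the stochastic case, uncertain) demand forecast, applying only the first control action.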
Abstract:
Distributed generation plays a key role in reducing CO2 emissions and losses in transmission of power. However, due to the nature of renewable resources, distributed generation requires suitable control strategies to assure reliability and optimality for the grid. Multi-agent systems are perfect candidates for providing distributed control of distributed generation stations as well as providing reliability and flexibility for grid integration. The proposed multi-agent energy management system consists of single-type agents who control one or more grid entities, which are represented as generic sub-agent elements. The agent applies one control algorithm across all elements and uses a cost function to evaluate the suitability of the element as a supplier. The behavior set by the agent's user defines which parameters of an element have greater weight in the cost function, which allows the user to specify the preference on suppliers dynamically. This study shows the ability of the multi-agent energy management system to select suppliers according to the selection behavior given by the user. The optimality of the supplier for the required demand is ensured by the cost function based on the parameters of the element.
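A hedged sketch of a weighted cost function of the kind described above for ranking candidate supplier elements; the element parameters, weights and behaviour profiles are invented for illustration and are not taken from the proposed system.

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    price: float         # cost per kWh (assumed units)
    emissions: float     # kg CO2 per kWh (assumed units)
    availability: float  # fraction of the requested demand it can cover

# Behaviour profiles: the user's preference sets the weights in the cost function.
BEHAVIOURS = {
    "cheapest": {"price": 0.8, "emissions": 0.1, "availability": 0.1},
    "greenest": {"price": 0.1, "emissions": 0.8, "availability": 0.1},
}

def cost(element: Element, behaviour: str) -> float:
    w = BEHAVIOURS[behaviour]
    # Lower is better; availability enters negatively because more is better.
    return (w["price"] * element.price
            + w["emissions"] * element.emissions
            - w["availability"] * element.availability)

elements = [
    Element("diesel genset", price=0.30, emissions=0.90, availability=1.0),
    Element("rooftop PV",    price=0.05, emissions=0.00, availability=0.4),
    Element("wind turbine",  price=0.08, emissions=0.00, availability=0.7),
]

for behaviour in BEHAVIOURS:
    best = min(elements, key=lambda e: cost(e, behaviour))
    print(f"{behaviour}: select {best.name}")
```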
Abstract:
This article seeks to explore the absence of the body in the depiction of dying women in a selection of seventeenth-century diaries. It considers the cultural forces that made this absence inevitable, and the means by which the physical body was replaced in death by a spiritual presence. The elevation of a dying woman from physical carer to spiritual nurturer in the days before death ensured that gender codes were not broken. The centrality of the body of the dying woman, within a female circle of care and support, was paradoxically juxtaposed with an effacement of the body in descriptions of a good death. In death, a woman might achieve the stillness, silence and compliance so essential to perfect early modern womanhood, and retrospective diary entries can achieve this ideal by replacing the body with images that deflect from the essential physicality of the woman.
Abstract:
Realistic representation of sea ice in ocean models involves the use of a non-linear free surface, a real freshwater flux and observance of requisite conservation laws. We show here that these properties can be achieved in practice through use of a rescaled vertical coordinate "z*" in z-coordinate models that allows one to follow undulations in the free surface under sea ice loading. In particular, the adoption of "z*" avoids the difficult issue of vanishing levels under thick ice. Details of the implementation within MITgcm are provided. A high-resolution global ocean–sea ice simulation illustrates the robustness of the z* formulation and reveals a source of oceanic variability associated with sea ice dynamics and ice-loading effects. The use of the z* coordinate allows one to achieve perfect conservation of fresh water, heat and salt, as shown in an extended integration of a coupled ocean–sea ice–atmosphere model.
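For reference, one commonly used form of such a rescaled coordinate (given here as a general illustration; the paper's exact expression may differ) maps the instantaneous water column, from the free surface at z = η down to the bottom at z = −H, onto a fixed range:

z^{*} = H \, \frac{z - \eta}{H + \eta}, \qquad -H \le z^{*} \le 0 \quad \text{for} \quad -H \le z \le \eta,

where H is the resting water depth and η the free-surface height (depressed under ice loading), so the same fixed set of model levels always spans the full column and no level can vanish under thick ice.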
Abstract:
This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as inputs for training a neural network classifier.
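As a hedged illustration of the decomposition step (using an off-the-shelf Daubechies filter bank via PyWavelets rather than the optimised quadrature mirror filters described above), the sketch below turns an ECG-like signal into subband features that could feed a neural network classifier; the signal and settings are invented.

```python
import numpy as np
import pywt

# Synthetic ECG-like signal (illustrative only): a train of sharp peaks plus noise.
fs = 360                      # sampling rate in Hz, assumed
t = np.arange(2 * fs) / fs
signal = np.exp(-((t % 0.8) - 0.1) ** 2 / 0.0005) + 0.05 * np.random.randn(t.size)

# Four-level wavelet decomposition with a standard orthogonal filter (db4).
coeffs = pywt.wavedec(signal, "db4", level=4)

# One simple fixed-length feature vector: the energy in each subband.
features = np.array([np.sum(c ** 2) for c in coeffs])
print("subband energies (A4, D4, D3, D2, D1):", np.round(features, 2))
```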
Abstract:
In e-health intervention studies, there are concerns about the reliability of internet-based, self-reported (SR) data and about the potential for identity fraud. This study introduced and tested a novel procedure for assessing the validity of internet-based, SR identity and validated anthropometric and demographic data via measurements performed face-to-face in a validation study (VS). Participants (n = 140) from seven European countries, participating in the Food4Me intervention study which aimed to test the efficacy of personalised nutrition approaches delivered via the internet, were invited to take part in the VS. Participants visited a research centre in each country within 2 weeks of providing SR data via the internet. Participants received detailed instructions on how to perform each measurement. Each individual’s identity was checked visually and by repeated collection and analysis of buccal cell DNA for 33 genetic variants. Validation of identity using genomic information showed perfect concordance between SR and VS. Similar results were found for demographic data (age and sex verification). We observed strong intra-class correlation coefficients between SR and VS for anthropometric data (height 0.990, weight 0.994 and BMI 0.983). However, internet-based SR weight was under-reported (Δ −0.70 kg [−3.6 to 2.1], p < 0.0001) and, therefore, BMI was lower for SR data (Δ −0.29 kg m−2 [−1.5 to 1.0], p < 0.0001). BMI classification was correct in 93 % of cases. We demonstrate the utility of genotype information for detection of possible identity fraud in e-health studies and confirm the reliability of internet-based, SR anthropometric and demographic data collected in the Food4Me study.
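A hedged sketch of the kind of paired comparison reported above (self-reported versus measured weight): the data are synthetic, and a paired t-test plus Pearson correlation from scipy stand in here for the study's intra-class correlation analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic paired data (illustrative only): measured weight and a slightly
# under-reported self-reported weight, mimicking the pattern described above.
measured = rng.normal(75, 12, size=140)
self_reported = measured - 0.7 + rng.normal(0, 1.0, size=140)

t, p = stats.ttest_rel(self_reported, measured)
r, _ = stats.pearsonr(self_reported, measured)

print(f"mean difference: {np.mean(self_reported - measured):+.2f} kg")
print(f"paired t-test: t = {t:.2f}, p = {p:.2g}")
print(f"correlation between SR and measured: r = {r:.3f}")
```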
Abstract:
We propose a bargaining process supergame over the strategies to play in a non-cooperative game. The agreement reached by players at the end of the bargaining process is the strategy profile that they will play in the original non-cooperative game. We analyze the subgame perfect equilibria of this supergame, and its implications on the original game. We discuss existence, uniqueness, and efficiency of the agreement reachable through this bargaining process. We illustrate the consequences of applying such a process to several common two-player non-cooperative games: the Prisoner’s Dilemma, the Hawk-Dove Game, the Trust Game, and the Ultimatum Game. In each of them, the proposed bargaining process gives rise to Pareto-efficient agreements that are typically different from the Nash equilibrium of the original games.
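To make the contrast mentioned above concrete, the sketch below enumerates the pure-strategy Nash equilibria and Pareto-efficient outcomes of a standard Prisoner's Dilemma; the payoff numbers are the usual textbook values, not taken from the paper, and the bargaining supergame itself is not implemented.

```python
# Payoffs (row player, column player) for a standard Prisoner's Dilemma.
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (3, 3), (C, D): (0, 5),
    (D, C): (5, 0), (D, D): (1, 1),
}
strategies = [C, D]

def is_nash(profile):
    """No player can gain by a unilateral deviation."""
    r, c = profile
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in strategies)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in strategies)
    return row_ok and col_ok

def is_pareto_efficient(profile):
    """No other outcome makes one player better off without harming the other."""
    u = payoffs[profile]
    return not any(v[0] >= u[0] and v[1] >= u[1] and v != u
                   for v in payoffs.values())

for profile in payoffs:
    tags = []
    if is_nash(profile):
        tags.append("Nash")
    if is_pareto_efficient(profile):
        tags.append("Pareto-efficient")
    print(profile, payoffs[profile], ", ".join(tags) or "-")
```

Mutual defection comes out as the unique Nash equilibrium yet is not Pareto-efficient, which is exactly the gap the proposed bargaining process is intended to close.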
Abstract:
Soils play a pivotal role in major global biogeochemical cycles (carbon, nutrient and water), while hosting the largest diversity of organisms on land. Because of this, soils deliver fundamental ecosystem services, and management to change a soil process in support of one ecosystem service can either provide co-benefits to other services or can result in trade-offs. In this critical review, we report the state-of-the-art understanding concerning the biogeochemical cycles and biodiversity in soil, and relate these to the provisioning, regulating, supporting and cultural ecosystem services which they underpin. We then outline key knowledge gaps and research challenges, before providing recommendations for management activities to support the continued delivery of ecosystem services from soils. We conclude that although there are knowledge gaps that require further research, enough is known to start improving soils globally. The main challenge is in finding ways to share knowledge with soil managers and policy-makers, so that best-practice management can be implemented. A key element of this knowledge sharing must be in raising awareness of the multiple ecosystem services underpinned by soils, and the natural capital they provide. The International Year of Soils in 2015 presents the perfect opportunity to begin a step-change in how we harness scientific knowledge to bring about more sustainable use of soils for a secure global society.