982 results for Side information
Abstract:
Intelligence (IQ) can be seen as the efficiency of mental processes or cognition, as can performance on basic information processing (IP) tasks like those used in our ongoing Memory, Attention and Problem Solving (MAPS) study. Measures of IQ and IP are correlated and both have a genetic component, so we are studying how the genetic variance in IQ is related to the genetic variance in IP. We measured intelligence with five subscales of the Multidimensional Aptitude Battery (MAB). The IP tasks included four variants of choice reaction time (CRT) and a visual inspection time (IT) task. The influence of genetic factors on the variance in each of the IQ, IP, and IT measures was investigated in 250 identical and nonidentical twin pairs aged 16 years. For a subset of 50 pairs we have test–retest data that allow us to estimate the stability of the measures. Mx was used for a multivariate genetic analysis that addresses whether the variance in the IQ and IP measures is mediated by common genetic factors. Analyses that show the modeled genetic and environmental influences on these measures of cognitive efficiency will be presented and their relevance to ideas on intelligence will be discussed.
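For context, twin analyses of this kind typically decompose phenotypic variance into additive genetic (A), common environmental (C) and unique environmental (E) components. The sketch below gives the standard univariate ACE expectations only; the specific multivariate parameterisation fitted in the MAPS study is not stated in the abstract:

\[
\mathrm{Var}(P) = a^{2} + c^{2} + e^{2}, \qquad
\mathrm{Cov}_{\mathrm{MZ}} = a^{2} + c^{2}, \qquad
\mathrm{Cov}_{\mathrm{DZ}} = \tfrac{1}{2}a^{2} + c^{2}, \qquad
h^{2} = \frac{a^{2}}{a^{2} + c^{2} + e^{2}}
\]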
Abstract:
This paper presents an agent-based approach to modelling individual driver behaviour under the influence of real-time traffic information. The driver behaviour models developed in this study are based on a behavioural survey of drivers which was conducted on a congested commuting corridor in Brisbane, Australia. Commuters' responses to travel information were analysed and a number of discrete choice models were developed to determine the factors influencing drivers' behaviour and their propensity to change route and adjust travel patterns. Based on the results obtained from the behavioural survey, the agent behaviour parameters which define driver characteristics, knowledge and preferences were identified and their values determined. A case study implementing a simple agent-based route choice decision model within a microscopic traffic simulation tool is also presented. Driver-vehicle units (DVUs) were modelled as autonomous software components that can each be assigned a set of goals to achieve and a database of knowledge comprising certain beliefs, intentions and preferences concerning the driving task. Each DVU provided route choice decision-making capabilities, based on perception of its environment, that were similar to the described intentions of the driver it represented. The case study clearly demonstrated the feasibility of the approach and the potential to develop more complex driver behavioural dynamics based on the belief-desire-intention agent architecture. (C) 2002 Elsevier Science Ltd. All rights reserved.
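To make the agent architecture concrete, the following minimal sketch (in Python; the class name, attributes and threshold value are illustrative assumptions, not taken from the paper or from any particular simulation tool) shows how a driver-vehicle unit's beliefs, preferences and route-switching intention might be encoded:

from dataclasses import dataclass, field

@dataclass
class DriverVehicleUnit:
    """Illustrative BDI-style agent: beliefs about route travel times,
    a goal of minimising travel time, and a preference threshold that
    must be exceeded before the driver abandons the habitual route."""
    current_route: str
    beliefs: dict = field(default_factory=dict)  # route name -> expected travel time (minutes)
    switch_threshold: float = 5.0                # minimum saving (minutes) before rerouting

    def perceive(self, broadcast):
        # Update beliefs from real-time traffic information.
        self.beliefs.update(broadcast)

    def choose_route(self):
        # Intention: keep the current route unless an alternative is believed
        # to save more than the switching threshold.
        best = min(self.beliefs, key=self.beliefs.get)
        saving = self.beliefs.get(self.current_route, float("inf")) - self.beliefs[best]
        if best != self.current_route and saving > self.switch_threshold:
            self.current_route = best
        return self.current_route

# Example: a commuter hears that the motorway has slowed and an arterial road is now faster.
dvu = DriverVehicleUnit(current_route="motorway",
                        beliefs={"motorway": 25.0, "arterial": 30.0})
dvu.perceive({"motorway": 40.0, "arterial": 28.0})
print(dvu.choose_route())  # -> "arterial"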
Abstract:
This study examined whether people born in other countries had higher rates of death and hospitalization due to road crashes than people born in Australia. Data on deaths that occurred in the whole of Australia between 1994 and 1997, and on hospitalizations due to road crashes that occurred in the state of New South Wales, Australia, between 1 July 1995 and 30 June 1997, were analyzed. The rates of death and hospitalization, adjusted for age and area of residence, were calculated using population data from the 1996 Australian census. The study categorized people born in other countries according to the language (English speaking, non-English speaking) and the road convention (left-hand side, right-hand side) of their country of birth. Australia has the left-hand side driving convention. The study found that drivers born in other countries had rates of death or hospitalization due to road trauma equal to or below those of Australian-born drivers. In contrast, pedestrians born in other countries, especially older pedestrians, had higher rates of death and hospitalization due to road crashes. Pedestrians aged 60 years or more born in non-English speaking countries where traffic travels on the right-hand side of the road had risks about twice those of Australian-born pedestrians in the same age group. (C) 2003 Elsevier Ltd. All rights reserved.
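The abstract does not say which standardisation method was applied; as a sketch, a directly age- and area-adjusted rate computed against the 1996 census population would take the form

\[
R_{\text{adj}} \;=\; \sum_{i} w_i \,\frac{d_i}{n_i},
\qquad
w_i \;=\; \frac{N_i}{\sum_j N_j},
\]

where, for stratum \(i\) (an age group by area-of-residence cell), \(d_i\) is the number of deaths or hospitalizations, \(n_i\) the person-years at risk in the study group, and \(N_i\) the 1996 census standard population.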
Abstract:
The general objective of this work was to study the contribution of ERP systems to the quality of managerial accounting information, through the perception of managers of large Brazilian companies. The starting premise was that companies now operate in a global and highly competitive environment in which information about enterprise performance and the evaluation of intangible assets are necessary conditions for survival. The research, exploratory in nature, is based on a sample of 37 managers of large Brazilian companies. The analysis of the data, treated by means of a qualitative method, showed that the great majority of the companies in the sample (86%) have an ERP system implemented, and that this system is used in combination with other application software. Most managers were also satisfied with the information generated with respect to the dimensions of Time and Content. However, with regard to the qualitative nature of the information, the ERP made some analyses possible when the Balanced Scorecard was adopted, but it did not provide information capable of estimating the investments made in intangible assets. These results suggest that in these companies ERP systems are not adequate to support strategic decisions.
Abstract:
Managing variable demand is particularly challenging for service organizations because service companies usually carry a large proportion of fixed costs. The article studies how a services organization manages its demand variability and how this relates to the organization's profitability. The study also looked for alternatives used to reduce the impact of demand variability on the company's profitability. The research was based on a case study of a Brazilian information technology services provider. The study suggests that alternatives such as using outsourced employees to cover demand peaks may bring benefits only in the short term, reducing the company's profitability in the long term. Some options are identified, such as the internationalization of employees and investment in developing the company's own workforce.
Abstract:
Background: A major goal in the post-genomic era is to identify and characterise disease susceptibility genes and to apply this knowledge to disease prevention and treatment. Rodents and humans have remarkably similar genomes and share closely related biochemical, physiological and pathological pathways. In this work we utilised the latest information on the mouse transcriptome as revealed by the RIKEN FANTOM2 project to identify novel human disease-related candidate genes. We define a new term, patholog, to mean a homolog of a human disease-related gene encoding a product (transcript, anti-sense or protein) potentially relevant to disease. Rather than just focus on Mendelian inheritance, we applied the analysis to all potential pathologs regardless of their inheritance pattern. Results: Bioinformatic analysis and human curation of 60,770 RIKEN full-length mouse cDNA clones produced 2,578 sequences that showed similarity (70-85% identity) to known human-disease genes. Using a newly developed biological information extraction and annotation tool (FACTS) in parallel with human expert analysis of 17,051 MEDLINE scientific abstracts, we identified 182 novel potential pathologs. Of these, 36 were identified by computational tools only, 49 by human expert analysis only and 97 by both methods. These pathologs were related to neoplastic (53%), hereditary (24%), immunological (5%), cardio-vascular (4%), or other (14%) disorders. Conclusions: Large scale genome projects continue to produce a vast amount of data with potential application to the study of human disease. For this potential to be realised we need intelligent strategies for data categorisation and the ability to link sequence data with relevant literature. This paper demonstrates the power of combining human expert annotation with FACTS, a newly developed bioinformatics tool, to identify novel pathologs from within large-scale mouse transcript datasets.
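Purely as an illustration of the similarity-band filtering step (the function, field names, input format and clone identifiers below are assumptions, not the actual FANTOM2/FACTS pipeline), candidate patholog selection by percent identity could look like this in Python:

def candidate_pathologs(hits, lo=70.0, hi=85.0):
    """Keep cDNA-to-disease-gene hits whose percent identity to a known
    human disease gene falls inside the 70-85% band described in the abstract."""
    return [h for h in hits if lo <= h["pct_identity"] <= hi]

hits = [
    {"clone_id": "RIKEN_clone_A", "disease_gene": "BRCA1", "pct_identity": 78.2},
    {"clone_id": "RIKEN_clone_B", "disease_gene": "CFTR", "pct_identity": 99.1},
]
print(candidate_pathologs(hits))  # keeps only the 78.2% hit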
Abstract:
Major requirements for the performance of liver biopsy (LB) are the benefits for the patient and the impossibility of obtaining the same information by less invasive procedures. In the last two decades physicians have faced the difficult task of convincing a patient positive for hepatitis C, with minimal clinical or laboratory alterations, to undergo LB in order to evaluate the status of the disease for therapeutic management. The characteristics of the needle used for percutaneous LB interfere with the accuracy of diagnosis. In chronic hepatitis C (CHC), validity is achieved with liver fragments about 25 mm in length containing more than 10 portal tracts. Morbidity due to LB is mainly related to bleeding, but death is very rare. Severe complications are also uncommon, increasing with the number of passes and decreasing with the experience of the operator and with ultrasound guidance. Although CHC is a diffuse disease, the various areas of the liver may not be equally affected and sampling errors are possible. Another potential limitation of LB is the discordance between pathologists in its interpretation. To replace LB, many panels of surrogate markers have been described, aiming to identify the extent of fibrosis and inflammation. All of them have used LB as their "gold standard". Liver biopsy continues to be the most reliable method to evaluate the possibility of therapy for CHC. Universal treatment of all patients with a diagnosis of CHC would be ideal, but there are three main drawbacks: overall efficacy is as low as 50%, side effects are common and may be severe, and treatment is prolonged and expensive. The acceptability of the biopsy by the patient is highly dependent on the physician's conviction of its usefulness.
Abstract:
Three experiments examined the hypothesis that people show consistency in motivated social cognitive processing across self-serving domains. Consistent with this hypothesis, Experiment 1 revealed that people who rated a task at which they succeeded as more important than a task at which they failed also cheated on a series of math problems, but only when they could rationalize their cheating as unintentional. Experiment 2 replicated this finding and demonstrated that a self-report measure of self-deception did not predict this rationalized cheating. Experiment 3 replicated Experiments 1 and 2 and ruled out several alternative explanations. These experiments suggest that people who show motivated processing in ego-protective domains also show motivated processing in extrinsic domains. These experiments also introduce a new measurement procedure for differentiating between intentional and rationalized cheating.
Abstract:
The field of protein crystallography inspires and enthrals, whether it be for the beauty and symmetry of a perfectly formed protein crystal, the unlocked secrets of a novel protein fold, or the precise atomic-level detail yielded from a protein-ligand complex. Since 1958, when the first protein structure was solved, there have been tremendous advances in all aspects of protein crystallography, from protein preparation and crystallisation through to diffraction data measurement and structure refinement. These advances have significantly reduced the time required to solve protein crystal structures, while at the same time substantially improving the quality and resolution of the resulting structures. Moreover, the technological developments have induced researchers to tackle ever more complex systems, including ribosomes and intact membrane-bound proteins, with a reasonable expectation of success. In this review, the steps involved in determining a protein crystal structure are described and the impact of recent methodological advances identified. Protein crystal structures have proved to be extraordinarily useful in medicinal chemistry research, particularly with respect to inhibitor design. The precise interaction between a drug and its receptor can be visualised at the molecular level using protein crystal structures, and this information then used to improve the complementarity and thus increase the potency and selectivity of an inhibitor. The use of protein crystal structures in receptor-based drug design is highlighted by (i) HIV protease, (ii) influenza virus neuraminidase and (iii) prostaglandin H2 synthetase. These represent, respectively, examples of protein crystal structures that (i) influenced the design of drugs currently approved for use in the treatment of HIV infection, (ii) led to the design of compounds currently in clinical trials for the treatment of influenza infection and (iii) could enable the design of highly specific non-steroidal anti-inflammatory drugs that lack the common side-effects of this drug class.