956 results for Large modeling projects
Abstract:
We re-mapped the soils of the Murray-Darling Basin (MDB) in 1995-1998 with a minimum of new fieldwork, making the most of existing data. We collated existing digital soil maps and used inductive spatial modelling to predict soil types from those maps combined with environmental predictor variables. Lithology, Landsat Multi Spectral Scanner (Landsat MSS) imagery, the 9-s digital elevation model (DEM) of Australia and derived terrain attributes, all gridded to 250-m pixels, were the predictor variables. Because the basin-wide datasets were very large, data mining software was used for modelling. Rule induction by data mining was also used to define the spatial domain of extrapolation for the extension of soil-landscape models from existing soil maps. Procedures to estimate the uncertainty associated with the predictions and the quality of information for the new soil-landforms map of the MDB are described.
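The inductive modelling described here is, in essence, supervised classification: soil classes sampled from the existing maps are the training labels, and the gridded covariates are the features. A minimal sketch of that idea, with scikit-learn's decision tree standing in for the data mining software the authors used (file name and column names are hypothetical):

```python
# Sketch: predict soil type from gridded environmental covariates via
# rule induction (a decision tree stands in for the study's data
# mining software). File and column names are illustrative only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# One row per 250-m pixel sampled from the existing soil maps:
# covariates (lithology assumed pre-encoded as integer class codes,
# Landsat MSS bands, DEM and terrain attributes) plus the soil label.
pixels = pd.read_csv("mdb_training_pixels.csv")
X = pixels[["lithology", "mss_b4", "mss_b5", "mss_b6", "mss_b7",
            "elevation", "slope", "wetness_index"]]
y = pixels["soil_type"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

tree = DecisionTreeClassifier(max_depth=8, min_samples_leaf=50)
tree.fit(X_train, y_train)
print("held-out accuracy:", tree.score(X_test, y_test))

# The induced rules can be inspected, then applied to every pixel
# in the basin to produce the new soil map.
print(export_text(tree, feature_names=list(X.columns)))
```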
Abstract:
This paper evaluates the role Strategic Research Partnerships (SRPs) play in Asia. Specific Asian institutional settings influence the roles of SRPs. Japan is regarded as a forerunner in the practice of SRPs: a lack of spillover channels, limited opportunities for mergers and acquisitions, weak university research and pressure for internal diversification motivate Japanese firms to form SRPs. In Korea, SRPs are regarded as a means to promote large-scale research projects. In Taiwan, SRPs are formed to facilitate technological diffusion. Empirical findings on SRPs, focusing on government-sponsored R&D consortia in Japan, are summarized, and issues regarding SRP formation and its effects on the R&D spending and productivity of participating firms are examined. Reference is made to alternative forms of measurement of SRPs, and their potential application to Asian countries is assessed. Enhancing the capacity of policy-makers to assess the extent and contribution of SRPs is considered a priority.
Abstract:
The paper presents a theory for modeling flow in anisotropic, viscous rock. This theory was originally developed for the simulation of large-deformation processes, including the folding and kinking of multi-layered visco-elastic rock (Muhlhaus et al. [1,2]). The orientation of slip planes in the context of crystallographic slip is determined by the normal vector of these surfaces, the director. The model is applied to simulate anisotropic mantle convection. We compare the evolution of flow patterns, Nusselt number and director orientations for isotropic and anisotropic rheologies. In the simulations we utilize two different finite element methodologies: the Lagrangian Integration Point method (Moresi et al. [8]) and an Eulerian formulation, which we implemented in the finite-element-based PDE solver Fastflo (www.cmis.csiro.au/Fastflo/). The reason for utilizing two different finite element codes was, firstly, to study the influence of an anisotropic power-law rheology, which is not currently implemented in the Lagrangian Integration Point scheme [8], and secondly, to study the numerical performance of the Eulerian (Fastflo) and Lagrangian integration schemes [8]. It turned out that, whereas in the Lagrangian method the Nusselt number versus time reached only a quasi-steady state in which the Nusselt number oscillates around a steady value, the Eulerian scheme reaches exact steady states and produces a high degree of alignment (director orientation locally orthogonal to the velocity vector almost everywhere in the computational domain). In the simulations, emergent anisotropy was strongest, in terms of modulus contrast, in the up- and down-welling plumes. Mechanisms for anisotropic material behavior in the mantle-dynamics context are discussed by Christensen [3]. The dominant mineral phases in the mantle generally do not exhibit strong elastic anisotropy, but they may still be oriented by the convective flow. Thus viscous anisotropy (the main focus of this paper) may or may not correlate with elastic or seismic anisotropy.
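For orientation, a transversely isotropic viscous rheology of this kind is commonly written in terms of the director n, an ordinary shear viscosity η, and a weaker shear viscosity η_s on the slip planes. The following is a sketch of one standard form, not necessarily the exact formulation of [1,2]:

```latex
% Transversely isotropic viscous law with director n_i:
% tau = deviatoric stress, D = stretching (rate of deformation).
\[
  \tau_{ij} = 2\eta\, D_{ij} - 2(\eta - \eta_s)\,\Lambda_{ijkl}\, D_{kl},
\]
\[
  \Lambda_{ijkl} = \tfrac{1}{2}\!\left( n_i n_k \delta_{lj} + n_j n_k \delta_{il}
     + n_i n_l \delta_{kj} + n_j n_l \delta_{ik} \right) - 2\, n_i n_j n_k n_l .
\]
% The director is advected with the flow as the normal to a material
% plane, e.g. \(\dot{n}_i = -\,n_j\, \partial v_j / \partial x_i\),
% followed by renormalization to unit length.
```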
Abstract:
Purpose - The purpose of this paper is to provide a framework for radio frequency identification (RFID) technology adoption considering company size and five dimensions of analysis: RFID applications; expected benefits; business drivers or motivations; barriers and inhibitors; and organizational factors. Design/methodology/approach - A framework for RFID adoption derived from the literature and from practical experience on the subject is developed. This framework provides a conceptual basis for analyzing a survey conducted with 114 companies in Brazil. Findings - Many companies have been developing RFID initiatives in order to identify potential applications and map the benefits associated with their implementation. The survey highlights the importance of business drivers in the RFID implementation stage and shows that companies implement RFID focusing on a few specific applications. However, there is a weak association between expected benefits and business challenges and the current level of RFID technology adoption in Brazil. Research limitations/implications - The paper is not exhaustive, since RFID adoption in Brazil was at an early stage at the time of the survey. Originality/value - The main contribution of the paper is a framework for analyzing RFID technology adoption. The authors use this framework to analyze RFID adoption in Brazil, where it proved useful for identifying key issues for technology adoption. The paper is useful to researchers or practitioners who are focused on technology adoption, in particular RFID technology.
Abstract:
Aim: To test the efficacy of a comprehensive health assessment using the CHAP tool in adults with an intellectual disability (ID). Method: A cluster randomised controlled design was used. The intervention group received the CHAP, while the control group received usual care. The tool directed carers to gather a health history, which was reviewed by the person's general practitioner (GP), who completed a medical examination and a healthcare plan. The tool acted as an advocacy tool and a ticket of entry to the GP's surgery, and educated the GP and the caregiver about deficits in the healthcare of adults with ID. The healthcare of the participants was followed for one year after the intervention through the collection of data from GP and service providers' notes. Interviews were also performed with all those involved. Results: We obtained a representative sample of adults with ID (RR%). The intervention group received a significant increase in many health promotion/disease prevention activities; e.g., hearing screening was times and a Pap smear was times more likely to have occurred in the intervention group. We also found a trend towards earlier detection of disease. Conclusions: The CHAP process improves the provision of health screening/promotion activities and should be implemented.
Abstract:
Background: A major goal in the post-genomic era is to identify and characterise disease susceptibility genes and to apply this knowledge to disease prevention and treatment. Rodents and humans have remarkably similar genomes and share closely related biochemical, physiological and pathological pathways. In this work we utilised the latest information on the mouse transcriptome as revealed by the RIKEN FANTOM2 project to identify novel human disease-related candidate genes. We define a new term, patholog, to mean a homolog of a human disease-related gene encoding a product (transcript, anti-sense or protein) potentially relevant to disease. Rather than just focus on Mendelian inheritance, we applied the analysis to all potential pathologs regardless of their inheritance pattern. Results: Bioinformatic analysis and human curation of 60,770 RIKEN full-length mouse cDNA clones produced 2,578 sequences that showed similarity (70-85% identity) to known human-disease genes. Using a newly developed biological information extraction and annotation tool (FACTS) in parallel with human expert analysis of 17,051 MEDLINE scientific abstracts, we identified 182 novel potential pathologs. Of these, 36 were identified by computational tools only, 49 by human expert analysis only and 97 by both methods. These pathologs were related to neoplastic (53%), hereditary (24%), immunological (5%), cardiovascular (4%) or other (14%) disorders. Conclusions: Large-scale genome projects continue to produce a vast amount of data with potential application to the study of human disease. For this potential to be realised we need intelligent strategies for data categorisation and the ability to link sequence data with relevant literature. This paper demonstrates the power of combining human expert annotation with FACTS, a newly developed bioinformatics tool, to identify novel pathologs from within large-scale mouse transcript datasets.
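The sequence-screening step, keeping clones whose best hit to a known disease gene falls in the 70-85% identity band (close enough to be a homolog, divergent enough not to be the human gene itself), is straightforward to illustrate. A minimal sketch over BLAST tabular output; the file name and identity thresholds here are taken from the abstract, but the pipeline itself is an assumption, not the project's actual code:

```python
# Sketch: flag candidate pathologs from BLAST tabular output
# (-outfmt 6: qseqid sseqid pident length ... evalue bitscore).
# File name is hypothetical; the 70-85% window is from the paper.
import csv

best_hit = {}  # clone id -> (disease gene id, percent identity)

with open("clones_vs_disease_genes.tsv") as fh:
    for row in csv.reader(fh, delimiter="\t"):
        clone, gene, pident = row[0], row[1], float(row[2])
        # keep only the highest-identity hit per clone
        if clone not in best_hit or pident > best_hit[clone][1]:
            best_hit[clone] = (gene, pident)

# Candidate pathologs: similar to a known disease gene, but not
# identical to it.
candidates = {c: h for c, h in best_hit.items() if 70.0 <= h[1] <= 85.0}
print(f"{len(candidates)} candidate patholog clones")
```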
Abstract:
This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make an inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy-sector and CO2-price behavior are similar in both versions (in the recursive version of the model we impose the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate). The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and representing fewer explicit technological options, likely affect the results. Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy, where agents face high levels of uncertainty that likely leads to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented.
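The efficiency condition imposed on the recursive version is the standard no-arbitrage result for bankable allowances: if a permit can be saved from one period to the next, cost-minimising agents must be indifferent between using it now and banking it, so its price must grow at the market interest rate. As a sketch (notation mine, not the EPPA model's):

```latex
% Intertemporal no-arbitrage for bankable emission permits:
% banking a permit from t to t+1 must yield the market return r,
%   p_{t+1} = (1 + r)\, p_t ,
% so along an efficient abatement path
\[
  p_t = p_0\,(1+r)^t ,
\]
% i.e. the CO2 price rises at the interest rate, equalising marginal
% abatement cost across periods in present-value terms.
```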
Abstract:
Neotropical swarm-founding wasps build nests enclosed in a covering envelope, which makes it difficult to count individual births and deaths. Knowledge of worker demography is therefore very limited for swarm-founding species compared with that for independent-founding species. In this study, we explored the worker demography of the swarm-founding wasp Polybia paulista, whose colony size usually exceeds several thousand adults. We treated each wasp colony as an open population and estimated the survival probability, recruitment rate, and population size of workers using extensions of the Cormack-Jolly-Seber model. We found that capture probability varied considerably among workers, probably due to age polyethism and/or task specialization. The daily survival rate of workers was high (around 0.97) throughout the season and was not related to the phase of colony development. The recruitment rate, on the other hand, ranged from 0 to 0.37, suggesting that worker production was substantially less important than worker survival in determining worker population fluctuations. When we compared survival rates among worker groups of one colony, the mean daily survival rate was lower for founding workers than for progeny workers and tended to be higher in progeny workers that emerged in winter. These differences in survivorship patterns among worker cohorts may be related to worker foraging activity and/or level of parasitism.
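For readers unfamiliar with the Cormack-Jolly-Seber framework: it models individual capture histories through apparent survival probabilities φ_t and capture probabilities p_t. As an illustrative sketch in standard CJS notation (not the authors' specific extension), the probability of the history 1011, an individual marked at occasion 1, missed at 2, and seen at 3 and 4, is:

```latex
% CJS cell probability for capture history 1011
% (phi_t = survival from t to t+1, p_t = capture probability at t):
\[
  \Pr(1011) \;=\; \phi_1 (1 - p_2)\, \phi_2\, p_3\, \phi_3\, p_4 .
\]
% Products of such terms over all marked individuals form the
% likelihood; extended (Jolly-Seber-type) models add parameters for
% recruitment and population size, as estimated in this study.
```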
Abstract:
On the basis of a spatially distributed sediment budget across a large basin, the costs of achieving given sediment reduction targets in rivers were estimated. A range of investment prioritization scenarios was tested to identify the most cost-effective strategy to control suspended sediment loads. The scenarios were based on successively introducing more information from the sediment budget. The relationship between the spatial heterogeneity of contributing sediment sources and the cost-effectiveness of prioritization was investigated. Cost-effectiveness was shown to increase with the sequential introduction of sediment budget terms. The lowest-cost solution was achieved by including spatial information linking sediment sources to the downstream target location; it produced cost curves similar to those derived using a genetic algorithm formulation. Appropriate investment prioritization can offer large cost savings, because costs can vary severalfold depending on what type of erosion source or sediment delivery mechanism is targeted. Target setting that considers only erosion source rates can result in spending more money than random management intervention to achieve downstream targets. Coherent spatial patterns of contributing sediment emerge from the budget model and its many inputs, and the heterogeneity in these patterns can be summarized in a succinct form. This summary was shown to be consistent with the cost difference between local and regional prioritization for three of four test catchments. To explain the effect in the fourth catchment, the detail of the individual sediment sources needed to be taken into account.
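The core prioritization idea, ranking interventions by cost per tonne of sediment actually delivered to the downstream target rather than per tonne eroded at the source, can be sketched as a greedy cost-curve calculation. Everything below (field names, costs, delivery ratios) is illustrative, not the paper's budget model:

```python
# Sketch: greedy cost-effectiveness prioritization of sediment sources.
# Each source has a treatment cost, an on-site erosion reduction, and
# a delivery ratio linking it to the downstream target (hypothetical).
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    cost: float            # $ to treat this source
    erosion_cut: float     # t/yr reduction at the source
    delivery_ratio: float  # fraction reaching the downstream target

    @property
    def delivered_cut(self) -> float:
        return self.erosion_cut * self.delivery_ratio

def cost_curve(sources, target_cut):
    """Pick sources in order of $ per delivered tonne until the
    downstream reduction target is met; return (total cost, picks)."""
    ranked = sorted(sources, key=lambda s: s.cost / s.delivered_cut)
    total_cost, achieved, picks = 0.0, 0.0, []
    for s in ranked:
        if achieved >= target_cut:
            break
        total_cost += s.cost
        achieved += s.delivered_cut
        picks.append(s.name)
    return total_cost, picks

sources = [
    Source("gully A", cost=50_000, erosion_cut=400, delivery_ratio=0.9),
    Source("hillslope B", cost=20_000, erosion_cut=600, delivery_ratio=0.1),
    Source("bank C", cost=30_000, erosion_cut=300, delivery_ratio=0.8),
]
print(cost_curve(sources, target_cut=500))
# Ranking by erosion_cut alone would favour hillslope B, which delivers
# little to the target -- the trap the abstract warns about.
```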
Abstract:
In the current work, we studied the effect of the nonionic detergent dodecyloctaethyleneglycol, C₁₂E₈, on the structure and oligomeric form of the Na,K-ATPase membrane enzyme (the sodium-potassium pump) in aqueous suspension by means of small-angle X-ray scattering (SAXS). Samples composed of 2 mg/mL of Na,K-ATPase extracted from rabbit kidney medulla, whether in the presence of a small amount of C₁₂E₈ (0.005 mg/mL) or at larger concentrations ranging from 2.7 to 27 mg/mL, did not present catalytic activity. Under this condition, an oligomerization of the α subunits is expected. SAXS data were analyzed by means of a global fitting procedure, supposing that the scattering is due to two independent contributions: one coming from the enzyme and the other from C₁₂E₈ micelles. At the low detergent content (0.005 mg/mL), the SAXS results showed that Na,K-ATPase is associated into aggregates larger than the (αβ)₂ form. When 2.7 mg/mL of C₁₂E₈ is added, the data analysis revealed the presence of α₄ aggregates in the solution and some free micelles. Increasing the detergent amount up to 27 mg/mL does not disturb the α₄ aggregate: just more micelles of the same size and shape are proportionally formed in solution. We believe that our results shed light on how nonionic detergents induce subunit dissociation and reassembly to minimize the exposure of hydrophobic residues to the aqueous solvent.
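The global fit described, modelling the measured intensity as the sum of two independent contributions, one from the enzyme and one from free micelles, reduces to a weighted two-component fit. A minimal sketch with placeholder form factors (both species approximated as spheres purely for illustration; the actual analysis used proper models for each species, and the synthetic "data" below are stand-ins):

```python
# Sketch: two-component global fit of a SAXS curve,
# I(q) = c_enzyme * P_enzyme(q) + c_micelle * P_micelle(q).
# Sphere form factors and synthetic data are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def sphere_ff(q, R):
    """Normalized form factor of a homogeneous sphere of radius R."""
    x = q * R
    return (3 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

def model(q, c_enz, R_enz, c_mic, R_mic):
    # Sum of enzyme and micelle contributions, fitted globally.
    return c_enz * sphere_ff(q, R_enz) + c_mic * sphere_ff(q, R_mic)

q = np.linspace(0.01, 0.3, 200)          # 1/Angstrom (hypothetical grid)
I_obs = model(q, 1.0, 60.0, 0.3, 25.0)   # stand-in "measured" curve
I_obs += np.random.normal(0, 0.001, q.size)

popt, pcov = curve_fit(model, q, I_obs, p0=[0.5, 50.0, 0.5, 20.0])
print("fitted [c_enz, R_enz, c_mic, R_mic]:", popt)
```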