905 results for PACS: expert systems and other AI software and techniques
Abstract:
Soils represent a large carbon pool, approximately 1500 Gt, equivalent to almost three times the quantity stored in terrestrial biomass and twice the amount stored in the atmosphere. Any modification of land use or land management can induce variations in soil carbon stocks, even in agricultural systems that are perceived to be in a steady state. Tillage practices often induce soil aerobic conditions that are favourable to microbial activity and may lead to a degradation of soil structure. As a result, mineralisation of soil organic matter increases in the long term. The adoption of no-tillage systems and the maintenance of a permanent vegetation cover using a direct-seeding mulch-based cropping (DMC) system may increase carbon levels in the topsoil. In Brazil, no-tillage practices (mainly DMC) were introduced approximately 30 years ago in the south, in Paraná state, primarily as a means of reducing erosion. Research subsequently began to study the management of crop residues and their effects on soil fertility, whether in terms of phosphorus management, as a means of controlling soil acidity, or in determining how manures can be applied in a more localised manner. The spread of no-till in Brazil has involved a large amount of extension work. The area under no-tillage is still increasing in the centre and north of the country and currently occupies ca. 20 million hectares, covering a diversity of environmental conditions, cropping systems and management practices. Most studies of Brazilian soils give rates of carbon storage in the top 40 cm of the soil of 0.4 to 1.7 t C ha⁻¹ per year, with the highest rates in the Cerrado region. However, caution is needed when analysing DMC systems in terms of carbon sequestration. Comparisons should include changes in trace gas fluxes and should not be limited to a consideration of carbon storage in the soil alone if the full implications for global warming are to be assessed.
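Taken at face value, the figures above imply a first-order national total. The sketch below simply multiplies the reported per-hectare storage rates by the no-tillage area; it is illustrative only, assumes the whole 20 million hectares stores carbon at the quoted rates, and ignores the trace-gas caveat the abstract itself raises.

```python
# Back-of-envelope total implied by the abstract's figures (illustrative only).
AREA_HA = 20e6                    # hectares under no-tillage in Brazil
RATE_LOW, RATE_HIGH = 0.4, 1.7    # t C per hectare per year, top 40 cm

low_mt = AREA_HA * RATE_LOW / 1e6    # tonnes -> megatonnes
high_mt = AREA_HA * RATE_HIGH / 1e6

print(f"Implied storage: {low_mt:.0f}-{high_mt:.0f} Mt C per year")  # 8-34 Mt C/yr
```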
Abstract:
In this article, we examine the case of a system that cooperates with a “direct” user to plan an activity that some “indirect” user, not interacting with the system, should perform. The specific application we consider is the prescription of drugs: the direct user is the prescriber and the indirect user is the person responsible for performing the therapy. Relevant characteristics of the two users are represented in two user models. Explanation strategies are represented in planning operators whose preconditions encode the cognitive state of the indirect user; this allows the message to be tailored to the indirect user's characteristics. Expansion of optional subgoals and selection among candidate operators are made by applying decision criteria represented as metarules, which negotiate between the direct and indirect users' views while also taking into account the context in which the explanation is provided. After the message has been generated, the direct user may ask to add or remove some items, or to change the message style. The system defends the indirect user's needs as far as possible by mentioning the rationale behind the generated message. If needed, the plan is repaired and the direct user model is revised accordingly, so that the system progressively learns to generate messages suited to the preferences of the people with whom it interacts.
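The abstract describes the architecture but not its concrete representation; the following is a minimal, hypothetical sketch of the core idea in Python: explanation strategies as operators whose preconditions test the indirect user's model, with a metarule-like chooser selecting among applicable candidates. Every name here (UserModel, Operator, choose_operator) is invented for illustration and is not the paper's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class UserModel:
    """Cognitive state of the indirect user (fields are hypothetical)."""
    literacy: str = "low"              # "low" | "high"
    knows_side_effects: bool = False

@dataclass
class Operator:
    """An explanation strategy, applicable only when its precondition holds."""
    name: str
    precondition: Callable[[UserModel], bool]
    content: str

OPERATORS = [
    Operator("side-effects-plain",
             lambda u: u.literacy == "low" and not u.knows_side_effects,
             "This medicine may make you feel sleepy."),
    Operator("side-effects-technical",
             lambda u: u.literacy == "high" and not u.knows_side_effects,
             "Common adverse effects include somnolence and dry mouth."),
]

def choose_operator(user: UserModel) -> Operator:
    """Metarule stand-in: return the first applicable strategy. The real
    system also negotiates the direct user's view and the context."""
    for op in OPERATORS:
        if op.precondition(user):
            return op
    raise LookupError("no applicable explanation strategy")

print(choose_operator(UserModel(literacy="low")).content)
```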
Abstract:
This article describes an empirical, user-centred approach to explanation design. It reports three studies that investigate what patients want to know when they have been prescribed medication. The question is asked in the context of the development of a drug prescription system called OPADE. The system is aimed primarily at improving the prescribing behaviour of physicians, but will also produce written explanations for indirect users such as patients. In the first study, a large number of people were presented with a scenario about a visit to the doctor and were asked to list the questions that they would like to ask the doctor about the prescription. On the basis of the results, a categorization of question types was developed in terms of how frequently particular questions were asked. In the second and third studies, a number of different explanations were generated in accordance with this categorization, and a new sample of people was presented with another scenario and asked to rate the explanations on a number of dimensions. The results showed significant differences between the explanations: people preferred those that included items corresponding to the frequently asked questions of the first study. For an explanation to be considered useful, it had to include information about side effects, what the medication does, and any lifestyle changes involved. The implications of the three studies are discussed in terms of the development of OPADE's explanation facility.
Abstract:
The ultimate criterion of success for interactive expert systems is that they will be used, and used to effect, by individuals other than the system developers. A key ingredient of success is involving users in the specification and development of systems as they are being built. However, until recently, system designers have paid little attention to ascertaining user needs and to developing systems with corresponding functionality and appropriate interfaces to match those requirements. Although the situation is beginning to change, many developers do not know how to go about involving users, or else tackle the problem in an inadequate way. This paper discusses the need for user involvement and considers why many developers are still not involving users in an optimal way. It looks at the different ways in which users can be involved in the development process and describes how to select appropriate techniques and methods for studying users. Finally, it discusses some of the problems inherent in involving users in expert system development, and recommends an approach which incorporates both ethnographic analysis and formal user testing.
Abstract:
Interest in animal welfare and welfare-friendly food products has been increasing in Europe over the last 10 years. The media, highlighting traditional farming methods and food scares such as those related to salmonella, bovine spongiform encephalopathy/variant Creutzfeldt-Jakob disease (BSE/vCJD) and avian influenza, have brought the methods of animal farming to public attention. Concerns about farm animal welfare are reflected in the increase in the number of vegetarians and vegans and in the number of consumers wishing to purchase food that is more animal welfare-friendly. This paper considers consumers’ attitudes to animal welfare and to marketing practices, such as product labelling, welfare grading systems and food assurance marks, using comparative data collected in a survey of around 1500 consumers in each of Great Britain, Italy and Sweden as part of the EU-funded Welfare Quality research project. The findings suggest a need for improved consumer information on the welfare provenance of food, using appropriate product labelling and other methods.
In vitro cumulative gas production techniques: History, methodological considerations and challenges
Abstract:
Methodology used to measure in vitro gas production is reviewed to determine the impacts of sources of variation on resultant gas production profiles (GPP). Current methods include measurement of gas production at constant pressure (e.g., using gas-tight syringes), a system that is inexpensive but may be less sensitive than others, affecting its suitability in some situations. Automated systems that measure gas production at constant volume allow pressure to accumulate in the bottle, which is recorded at different times to produce a GPP; pressure may become sufficiently high that the solubility of evolved gases in the medium is affected, resulting in a recorded volume of gas lower than that predicted from stoichiometric calculations. Several other methods measure gas production at constant pressure and volume with either pressure transducers or sensors, and these may be manual, semi-automated or fully automated in operation. In these systems, gas is released as pressure increases, and the vented gas is recorded. Agitating the medium does not consistently produce more gas with automated systems, and little or no effect of agitation was observed with manual systems. The apparatus affects the GPP, but mathematical manipulation may enable effects of the apparatus to be removed. The amount of substrate affects the volume of gas produced, but not the rate of gas production, provided there is sufficient buffering capacity in the medium. Systems that use a very small amount of substrate are prone to experimental error in sample weighing. The effect of sample preparation on GPP has been found to be important, but further research is required to determine the optimum preparation that mimics animal chewing. Inoculum is the single largest source of variation in measuring GPP, as rumen fluid is variable; sampling schedules, diets fed to donor animals and ratios of rumen fluid to medium must be selected such that microbial activity is sufficiently high that it does not limit the rate and extent of fermentation. The species of donor animal may also cause differences in GPP. End-point measures can be mathematically manipulated to account for species differences, but rates of fermentation are not similarly related. Other sources of inocula that have been used include caecal fluid (primarily for investigating hindgut fermentation in monogastrics), effluent from simulated rumen fermentation (e.g., 'Rusitec', which was as variable as rumen fluid), faeces, and frozen or freeze-dried rumen fluid (both less active than fresh rumen fluid). Use of mixtures of cell-free enzymes, or pure cultures of bacteria, may be a way of increasing GPP reproducibility while reducing reliance on surgically modified animals; however, more research is required to develop these inocula. A number of media have been developed which buffer the incubation and provide the relevant micro-nutrients to the microorganisms. To date, little research has been completed on relationships between the composition of the medium and measured GPP. However, comparing GPP from media that are either rich in N or N-free allows assessment of the contribution of N-containing compounds in the sample.
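The review does not prescribe a curve model, but GPP data are commonly summarised by fitting a single-pool exponential with a lag; the sketch below illustrates such a fit under that assumption, with invented data.

```python
# Fit a cumulative gas production profile (GPP) to a single-pool
# exponential model with lag: G(t) = A * (1 - exp(-c * (t - lag))).
# The model choice and the data are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

def gpp(t, A, c, lag):
    return A * (1.0 - np.exp(-c * np.clip(t - lag, 0.0, None)))

t_h  = np.array([0, 2, 4, 8, 12, 24, 48, 72], dtype=float)   # sampling times, h
g_ml = np.array([0, 3, 9, 22, 33, 52, 64, 66], dtype=float)  # cumulative gas, mL

(A, c, lag), _ = curve_fit(gpp, t_h, g_ml, p0=[70.0, 0.05, 1.0])
print(f"asymptote A = {A:.1f} mL, rate c = {c:.3f} /h, lag = {lag:.1f} h")
```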
Abstract:
Building services are worth about 2% of GDP and are essential for the effective and efficient operation of a building. It is increasingly recognised that the value of a building is related to the way it supports the client organisation’s ongoing business operations. Building services are central to the functional performance of buildings and provide the necessary conditions for the health, well-being, safety and security of the occupants. They frequently comprise several technologically distinct sub-systems, and their design and construction require the involvement of numerous disciplines and trades. Designers and contractors working on the same project are frequently employed by different companies. Materials and equipment are supplied by a diverse range of manufacturers. Facilities managers are responsible for the operation of the building services in use. Coordination between these participants is crucially important for achieving optimum performance, but too often it is neglected, leaving room for serious faults; effective integration is therefore essential. Modern technology offers increasing opportunities for integrated personal-control systems for lighting, ventilation and security, as well as for interoperability between systems. Opportunities for a new mode of systems integration are provided by the emergence of PFI/PPP procurement frameworks. This paper attempts to establish how systems integration can be achieved in the process of designing, constructing and operating building services. The essence of the paper, therefore, is to envisage the emergent organisational responses to the realisation of building services as an interactive systems network.
Abstract:
This is the first of two articles presenting a detailed review of the historical evolution of mathematical models applied in the development of building technology, covering both conventional and intelligent buildings. After presenting the technical differences between conventional and intelligent buildings, this article reviews the existing mathematical models, the abstraction levels of these models, and their links to the literature on intelligent buildings. The advantages and limitations of the applied mathematical models are identified, and the models are classified in terms of their application range and goal. We then describe how the early mathematical models, mainly physical models applied to conventional buildings, have faced new challenges in the design and management of intelligent buildings, and how this has led to the use of models which offer more flexibility to better cope with various uncertainties. In contrast with the early modelling techniques, approaches based on neural networks, expert systems, fuzzy logic and genetic models provide a promising means of accommodating these complications, as intelligent buildings now need integrated technologies which involve solving complex, multi-objective and integrated decision problems.
Abstract:
This article is the second part of a review of the historical evolution of mathematical models applied in the development of building technology. The first part described the current state of the art and contrasted the various models with regard to their application to conventional and intelligent buildings, concluding that the mathematical techniques adopted in neural networks, expert systems, fuzzy logic and genetic models, which can be used to address model uncertainty, are well suited to modelling intelligent buildings. Despite this progress, the likely future development of intelligent buildings along current trends exposes some potential limitations of these models. This paper attempts to uncover the fundamental limitations inherent in these models and provides some insights into future modelling directions, with special focus on the techniques of semiotics and chaos. Finally, by working through an example of an intelligent building system together with the mathematical models that have been developed for it, this review addresses the influence of mathematical models as a potential aid in developing intelligent buildings, and perhaps even more advanced buildings, in the future.
Abstract:
This article reviews current technological developments, particularly Peer-to-Peer technologies and Distributed Data Systems, and their value to community memory projects, particularly those concerned with preserving the cultural, literary and administrative data of cultures which have suffered genocide or are at risk of genocide. It draws attention to the comparatively strong online representation of genocide-denial groups and to changes in the technological strategies of Holocaust-denial and other far-right groups. It draws on the author's work providing IT support for a UK-based non-governmental organization that supports survivors of the genocide in Rwanda.
Abstract:
Examination by high-temperature GC (HTGC) of the methyl esters of the so-called 'ARN' naphthenic acids from crude oils of North Sea UK, Norwegian Sea and West African oilfields revealed the distributions of resolved 4-8 ring C-80 tetra acids and trace amounts of other acids. Whilst all three oils apparently contained the same acids, the proportions of each differed, possibly reflecting the growth temperatures of the archaebacteria from which the acids are assumed to have originated. The structures of the 4, 5, 7 and 8 ring acids are tentatively assigned by comparison with the known 6 ring acid and related natural products, and an HPLC method for the isolation of the individual acids is described. ESI-MS of individual acids isolated by preparative HPLC established the elution order of the 4-8 ring acids on the HPLC and HTGC systems and revealed the presence of previously unreported acids tentatively identified as C-81 and C-82 7 and 8 ring analogues.
Abstract:
Increasingly, distributed systems are being used to host all manner of applications. While these platforms provide a relatively cheap and effective means of executing applications, so far there has been little work on developing tools and utilities that can help application developers understand problems with the supporting software or the executing applications. To fully understand why an application executing on a distributed system is not behaving as expected, it is important that not only the application but also the underlying middleware and the operating system are analysed; otherwise issues could be missed, and overall performance profiling and fault diagnosis would certainly be harder. We believe that one approach to profiling and analysing distributed systems and their applications is via the plethora of log files generated at runtime. In this paper we report on a system (Slogger) that utilises various emerging Semantic Web technologies to gather the heterogeneous log files generated by the various layers in a distributed system and unify them in a common data store. Once unified, the log data can be queried and visualised in order to highlight potential problems or issues occurring in the supporting software or the application itself.
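The abstract names the approach (Semantic Web technologies over heterogeneous logs) but not its implementation. The sketch below is a minimal, hypothetical illustration in Python using rdflib: log entries from different layers are lifted into RDF triples in one graph and queried with SPARQL. The vocabulary (ex:source, ex:level, ex:message) is invented for the example and is not Slogger's schema.

```python
# Minimal sketch of Slogger-style log unification (vocabulary is invented):
# heterogeneous log entries -> RDF triples in one graph -> one SPARQL query.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/log#")
g = Graph()
g.bind("ex", EX)

def add_entry(source, level, message, i):
    entry = URIRef(f"http://example.org/log/{source}/{i}")
    g.add((entry, RDF.type, EX.LogEntry))
    g.add((entry, EX.source, Literal(source)))
    g.add((entry, EX.level, Literal(level)))
    g.add((entry, EX.message, Literal(message)))

# Entries from three layers of the stack, unified in one store.
add_entry("application", "WARN", "retrying job 42", 1)
add_entry("middleware", "ERROR", "broker connection refused", 2)
add_entry("os", "ERROR", "out of file descriptors", 3)

# A single query spans every layer: find all errors, whatever produced them.
q = """
    SELECT ?src ?msg WHERE {
        ?e a ex:LogEntry ; ex:level "ERROR" ; ex:source ?src ; ex:message ?msg .
    }
"""
for src, msg in g.query(q):
    print(f"[{src}] {msg}")
```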
Abstract:
The usefulness of motor subtypes of delirium is unclear due to inconsistency in subtyping methods and a lack of validation with objective measures of activity. The activity of 40 patients was measured over 24 h with a discrete accelerometer-based activity monitor. The continuous wavelet transform (CWT) with various mother wavelets was applied to accelerometry data from three randomly selected patients with DSM-IV delirium who were readily divided into hyperactive, hypoactive, and mixed motor subtypes. A classification tree used the periods of overall movement, as measured by the discrete accelerometer-based monitor, as the determining factors for classifying these delirious patients. The data used to create the classification tree were based upon the minimum, maximum, standard deviation, and number of coefficient values generated over a range of scales by the CWT. The classification tree was subsequently used to define the motoric subtypes of the remaining patients. This classification system shows how delirium subtypes can be categorized in relation to overall motoric behavior, and it was successfully applied to define the motoric subtypes of other patients. Motor subtypes of delirium defined by observed ward behavior differ in electronically measured activity levels.
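A compact sketch of the pipeline described above, assuming details the abstract leaves open (wavelet, scales, and how "number of coefficient values" is counted): CWT summary features extracted from accelerometry traces feed a decision tree. The data are synthetic and all parameter choices are illustrative, not the paper's.

```python
# CWT features from accelerometry -> classification tree (illustrative sketch).
import numpy as np
import pywt
from sklearn.tree import DecisionTreeClassifier

def cwt_features(signal, scales=np.arange(1, 32), wavelet="morl"):
    """Min, max, std and a count of large CWT coefficients across scales.
    Counting coefficients above one standard deviation is our assumption."""
    coef, _ = pywt.cwt(signal, scales, wavelet)
    return [coef.min(), coef.max(), coef.std(),
            int((np.abs(coef) > coef.std()).sum())]

rng = np.random.default_rng(0)
# Synthetic activity traces: hyperactive (1), hypoactive (0), mixed (2),
# crudely mimicked here by high, low and intermediate movement amplitude.
X = [cwt_features(rng.normal(0, amp, 288)) for amp in (2.0, 0.3, 1.0) for _ in range(5)]
y = [1] * 5 + [0] * 5 + [2] * 5

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.predict([cwt_features(rng.normal(0, 1.8, 288))]))  # likely [1]
```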
Abstract:
The transport of stratospheric air into the troposphere within deep convection was investigated using the Met Office Unified Model version 6.1. Three cases were simulated in which convective systems formed over the UK in the summer of 2005. For each case, simulations were performed on a grid with 4 km horizontal grid spacing, in which the convection was parameterized, and on a grid with 1 km horizontal grid spacing, which permitted explicit representation of the largest energy-containing scales of deep convection. Cross-tropopause transport was diagnosed using passive tracers initialized above the dynamically defined tropopause (the 2 potential vorticity unit surface) with a mixing ratio of 1. Although the synoptic-scale environment and triggering mechanisms varied between the cases, the total simulated transport was similar in all three. The total stratosphere-to-troposphere transport over the lifetime of the convective systems ranged from 25 to 100 kg/m² across the simulated systems and resolutions, corresponding to ∼5–20% of the total mass within a stratospheric column extending 2 km above the tropopause. In all simulations, transport into the lower troposphere (defined as below 3.5 km elevation) accounted for ∼1% of the total transport across the tropopause. In the 4 km runs most of the transport was due to parameterized convection, whereas in the 1 km runs it was due to explicitly resolved convection. The largest difference between resolutions occurred in the one case of midlevel convection considered, in which the total transport in the 1 km, explicit-convection simulation was 4 times that in the 4 km, parameterized-convection simulation. Although the total cross-tropopause transport was similar, stratospheric tracer reached near-surface elevations to a greater degree in the convection-parameterizing simulations than in the convection-permitting simulations.
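As a quick consistency check on the figures quoted above (assuming the low and high ends of the two ranges correspond to one another), the implied mass of the 2 km stratospheric column can be recovered from either end of the ranges; a back-of-envelope sketch:

```python
# Back-of-envelope check of the quoted figures (illustrative only).
# Transport of 25-100 kg/m2 is said to be ~5-20% of the mass in a column
# extending 2 km above the tropopause; both endpoints imply the same column mass.
print(25 / 0.05)    # 500.0 kg/m2 implied by the low ends
print(100 / 0.20)   # 500.0 kg/m2 implied by the high ends
print(0.01 * 100)   # ~1% to the lower troposphere: at most ~1 kg/m2
```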