753 results for packages
Abstract:
A wide-ranging multiprofessional research project explored issues relating to the introduction of assistive technology into the existing homes of older people, in order to give them the opportunity to remain at home. The financial relationship between assistive technology and packages of formal care was also explored. The costs of residential care were compared with those of a number of packages containing differing quantities of assistive technology, formal care and informal care. The analyses provide a strong financial case for substituting and/or supplementing formal care with assistive technology, even for individuals with quite disabling conditions. Although needs, and hence the cost of provision, rise with increasing levels of disability, the savings in care costs accrue quickly. Consideration of a variety of users with different needs and levels of informal care provision, occupying a very wide range of housing, leads to the conclusion that, in comparison with traditional care packages, incorporating significant amounts of assistive technology into care packages is at worst cost neutral, while with careful specification of the assistive technology major savings are feasible.
Abstract:
In the tender process, contractors often rely on subcontract and supply enquiries to calculate their bid prices. However, this integral part of the bidding process is not empirically articulated in the literature. Over 30 publications on contractors' tendering processes that discuss enquiries were reviewed and found to be based mainly on experiential knowledge rather than systematic evidence. The empirical research reported here helps to describe the enquiry process precisely, improve it in practice, and provide some basis for supporting it in theory. Using a live participant-observation case study approach, the whole tender process was shadowed in the offices of two of the top 20 UK civil engineering construction firms. This made it possible to investigate 15 research questions on how contractors enquire and obtain prices from subcontractors and suppliers. Forty-three subcontract enquiries and 18 supply enquiries were made across two different projects with an average value of 7m. An average of 15 subcontract packages and seven supply packages was involved; thus, two or three subcontractors or suppliers were invited to bid on each package. All enquiries were formulated by the estimator, with occasional involvement of three other personnel. Most subcontract prices were received in an average of 14 working days; supply prices took five days. The findings show 10 main activities involved in processing enquiries and their durations, as well as wasteful practices associated with enquiries. Contractors should limit their enquiry invitations to a maximum of three per package and optimize the waiting time for quotations in order to improve cost efficiency.
Abstract:
Once unit-cell dimensions have been determined from a powder diffraction data set and the crystal system is therefore known (e.g. orthorhombic), the method presented by Markvardsen, David, Johnson & Shankland [Acta Cryst. (2001), A57, 47-54] can be used to generate a table ranking the extinction symbols of the given crystal system according to probability. Markvardsen et al. tested a computer program (ExtSym) implementing the method against Pawley refinement outputs generated using the TF12LS program [David, Ibberson & Matthewman (1992). Report RAL-92-032. Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, UK]. Here, it is shown that ExtSym can be used successfully with many well-known powder diffraction analysis packages, namely DASH [David, Shankland, van de Streek, Pidcock, Motherwell & Cole (2006). J. Appl. Cryst. 39, 910-915], FullProf [Rodriguez-Carvajal (1993). Physica B, 192, 55-69], GSAS [Larson & Von Dreele (1994). Report LAUR 86-748. Los Alamos National Laboratory, New Mexico, USA], PRODD [Wright (2004). Z. Kristallogr. 219, 1-11] and TOPAS [Coelho (2003). Bruker AXS GmbH, Karlsruhe, Germany]. In addition, a precise description of the optimal input for ExtSym is given to enable other software packages to interface with ExtSym and to allow the improvement/modification of existing interfacing scripts. ExtSym takes as input the powder data in the form of integrated intensities and error estimates for these intensities. The output returned by ExtSym is demonstrated to be strongly dependent on the accuracy of these error estimates, and the reason for this is explained. ExtSym is tested against a wide range of data sets, confirming the algorithm to be very successful at ranking the published extinction symbol as the most likely.
Abstract:
The main activity carried out by the geophysicist when interpreting seismic data, in terms of both importance and time spent, is tracking (or picking) seismic events. In practice, this activity turns out to be rather challenging, particularly when the targeted event is interrupted by discontinuities such as geological faults or exhibits lateral changes in seismic character. In recent years, several automated schemes, known as auto-trackers, have been developed to assist the interpreter in this tedious and time-consuming task. The automatic tracking tools available in modern interpretation software packages often employ artificial neural networks (ANNs) to identify seismic picks belonging to target events through a pattern recognition process. The ability of ANNs to track horizons across discontinuities largely depends on how reliably the data patterns characterise these horizons. While seismic attributes are commonly used to characterise the amplitude peaks forming a seismic horizon, some researchers in the field claim that inherent seismic information is lost in the attribute extraction process and advocate instead the use of raw data (amplitude samples). This paper investigates the performance of ANNs using either characterisation method, and demonstrates how the complementarity of seismic attributes and raw data can be exploited, in conjunction with other geological information, in a fuzzy inference system (FIS) to achieve enhanced auto-tracking performance.
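The fusion step described above lends itself to a compact illustration. The following is a minimal Python sketch, not the authors' implementation, of how two tracker confidences (one from an attribute-based ANN, one from a raw-amplitude ANN) could be combined by simple fuzzy rules into a single pick score; the membership shapes, rule table and test values are illustrative assumptions.

import numpy as np

def tri(x, a, b, c):
    # Triangular membership function on [a, c], peaking at b.
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def pick_score(att_conf, raw_conf):
    # Fuzzify the two ANN confidences into 'low'/'high' memberships.
    low_a, high_a = tri(att_conf, -0.5, 0.0, 0.5), tri(att_conf, 0.5, 1.0, 1.5)
    low_r, high_r = tri(raw_conf, -0.5, 0.0, 0.5), tri(raw_conf, 0.5, 1.0, 1.5)
    # Sugeno-style rules: both high -> accept (1.0); disagreement -> 0.5;
    # both low -> reject (0.0). Output is the weighted average of the rules.
    w = np.array([min(high_a, high_r), min(high_a, low_r),
                  min(low_a, high_r),  min(low_a, low_r)])
    out = np.array([1.0, 0.5, 0.5, 0.0])
    return float(w @ out / (w.sum() + 1e-12))

print(pick_score(0.9, 0.8))   # agreement: close to 1.0
print(pick_score(0.9, 0.2))   # attributes and raw data disagree: 0.5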
Abstract:
This paper illustrates how nonlinear programming and simulation tools, which are available in packages such as MATLAB and SIMULINK, can easily be used to solve optimal control problems with state- and/or input-dependent inequality constraints. The method presented is illustrated with a model of a single-link manipulator, and is suitable for teaching to advanced undergraduate and Master's-level students in control engineering.
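As a rough, hedged illustration of the approach (not the paper's MATLAB/SIMULINK code), the Python sketch below transcribes a single-link manipulator swing-up into a finite-dimensional nonlinear program via trapezoidal collocation and hands it to a general-purpose NLP solver; the model parameters, effort cost and torque bound are assumed values.

import numpy as np
from scipy.optimize import minimize

m, l, g, I = 1.0, 1.0, 9.81, 1.0   # link mass, length, gravity, inertia (assumed)
N, T = 40, 2.0                     # collocation intervals, horizon [s]
h = T / N
u_max = 8.0                        # input-dependent inequality constraint |u| <= u_max

def unpack(z):
    th = z[:N + 1]; om = z[N + 1:2 * (N + 1)]; u = z[2 * (N + 1):]
    return th, om, u

def cost(z):                       # minimise control effort
    return h * np.sum(unpack(z)[2] ** 2)

def defects(z):                    # trapezoidal collocation of the dynamics
    th, om, u = unpack(z)
    f_om = (u - m * g * l * np.sin(th)) / I   # omega' from the pendulum model
    d_th = th[1:] - th[:-1] - 0.5 * h * (om[1:] + om[:-1])
    d_om = om[1:] - om[:-1] - 0.5 * h * (f_om[1:] + f_om[:-1])
    return np.concatenate([d_th, d_om])

def boundary(z):                   # rest-to-rest swing from 0 to pi
    th, om, _ = unpack(z)
    return np.array([th[0], om[0], th[-1] - np.pi, om[-1]])

z0 = np.concatenate([np.linspace(0, np.pi, N + 1), np.zeros(2 * (N + 1))])
res = minimize(cost, z0, method='SLSQP',
               bounds=[(None, None)] * (2 * (N + 1)) + [(-u_max, u_max)] * (N + 1),
               constraints=[{'type': 'eq', 'fun': defects},
                            {'type': 'eq', 'fun': boundary}])
print(res.success, res.fun)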
Abstract:
Light Detection And Ranging (LIDAR) is an important modality in terrain and land surveying for many environmental, engineering and civil applications. This paper presents the framework for a recently developed unsupervised classification algorithm called Skewness Balancing for object and ground point separation in airborne LIDAR data. The main advantages of the algorithm are threshold-freedom and independence from LIDAR data format and resolution, while preserving object and terrain details. The framework for Skewness Balancing is augmented in this contribution with a prediction model in which unknown LIDAR tiles can be categorised as “hilly” or “moderate” terrain. Accuracy assessment of the model is carried out using cross-validation, with an overall accuracy of 95%. An extension to the algorithm is developed to address the overclassification issue for hilly terrain. For moderate terrain, the results show that, in the classified tiles, detached objects (buildings and vegetation) and attached objects (bridges and motorway junctions) are separated from bare earth (ground, roads and yards), which makes Skewness Balancing ideal for integration into geographic information system (GIS) software packages.
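For intuition, here is a minimal Python sketch of the core Skewness Balancing idea as commonly described — peel off the highest returns while the elevation distribution remains positively skewed — rather than the paper's implementation; the synthetic tile and stopping rule below are assumptions.

import numpy as np
from scipy.stats import skew

def skewness_balancing(elevations):
    # Sort ascending, then remove the highest returns until the remaining
    # height distribution is no longer positively skewed (no threshold needed).
    z = np.sort(np.asarray(elevations, dtype=float))
    n = len(z)
    while n > 3 and skew(z[:n]) > 0:
        n -= 1
    return z[:n], z[n:]            # (ground points, object points)

# Synthetic tile: flat bare earth plus a handful of elevated rooftop returns.
rng = np.random.default_rng(0)
tile = np.concatenate([rng.normal(100.0, 0.3, 950),
                       rng.normal(112.0, 1.0, 50)])
ground, objects = skewness_balancing(tile)
print(len(ground), "ground points,", len(objects), "object points")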
Abstract:
Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible, partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process, thereby addressing the principal technical stumbling block to widespread adoption of site-specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields.

There are three main aspects to the project.

1) Machine vision hardware. The hardware component parts of the system are one or more cameras connected to a single-board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle using a single-camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required.

2) Image capture and weed identification software. The machine vision system will be attached to the toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions and at different crop growth stages, as well as against different crop backgrounds. Having captured geo-referenced images in the field, image analysis software will be developed by Murray State and Reading Universities, with advice from The Arable Group, to identify weed species. A wide range of pattern recognition techniques, and in particular Bayesian networks, will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project as we intend to collect and correlate images collected at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms.
Using such a list of features observable by our machine vision system, we will determine those that can be used to distinguish the weed species of interest.

3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology. Natural infestations will be mapped in the fields, but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced (a sketch of this fusion step follows the objectives below). If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would, therefore, arise directly from reducing herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for the future, where policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security. The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines.

Objectives: i) to develop a prototype machine vision system for automated image capture during agricultural field operations; ii) to prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; iii) to generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
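The survey-fusion step referenced above can be sketched in a few lines of Python. This is a hypothetical illustration, not project code: the grid resolution, the per-cell evidence rule and the coordinates are all assumptions.

import numpy as np

CELL, NX, NY = 2.0, 100, 100    # 2 m grid cells over a 200 m x 200 m field (assumed)

def rasterise(detections, nx=NX, ny=NY, cell=CELL):
    # detections: iterable of (x, y) geo-referenced weed sightings in metres.
    grid = np.zeros((ny, nx))
    for x, y in detections:
        i, j = int(y // cell), int(x // cell)
        if 0 <= i < ny and 0 <= j < nx:
            grid[i, j] += 1
    return grid

def fuse(surveys, min_hits=2):
    # Flag a cell as a weed patch only if it was detected in at least
    # min_hits independent surveys, down-weighting one-off misdetections.
    hits = sum((rasterise(s) > 0).astype(int) for s in surveys)
    return hits >= min_hits

# Three passes over the same (synthetic) patch; one pass has a spurious hit.
survey_a = [(10.5, 20.2), (11.0, 20.8), (60.0, 60.0)]
survey_b = [(10.7, 20.4), (11.2, 21.0)]
survey_c = [(10.6, 20.1), (11.1, 20.9)]
print(int(fuse([survey_a, survey_b, survey_c]).sum()), "cell(s) flagged as weed patch")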
Abstract:
In most commercially available predictive control packages there is a separation between economic optimisation and predictive control, although both algorithms may be part of the same software system. This arrangement is compared in this article with two alternative approaches in which the economic objectives are included directly in the predictive control algorithm. Simulations are carried out using the Tennessee Eastman process model.
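The difference between the two philosophies can be sketched in a few lines of Python: instead of tracking a setpoint handed down by a separate economic optimiser, the predictive controller's stage cost carries the economic term directly. The scalar plant, weights and price below are assumed for illustration and are unrelated to the Tennessee Eastman model.

import numpy as np
from scipy.optimize import minimize

a, b = 0.9, 0.5                  # scalar linear plant: x+ = a*x + b*u (assumed)
H = 10                           # prediction horizon
q, r, price = 1.0, 0.1, 0.3      # tracking weight, move weight, input price
x0, x_ref = 0.0, 1.0
u_bounds = [(0.0, 2.0)] * H      # input constraints

def rollout(u):
    x, xs = x0, []
    for uk in u:
        x = a * x + b * uk
        xs.append(x)
    return np.array(xs)

def cost(u):
    x = rollout(u)
    tracking = q * np.sum((x - x_ref) ** 2) + r * np.sum(u ** 2)
    economic = price * np.sum(u)       # e.g. the energy bill over the horizon
    return tracking + economic         # one objective, no separate optimiser

res = minimize(cost, np.zeros(H), bounds=u_bounds, method='L-BFGS-B')
print("first control move:", res.x[0])   # receding horizon: apply u[0] and repeat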
Abstract:
The benefits and applications of virtual reality (VR) in the construction industry have been investigated for almost a decade. However, the practical implementation of VR in the construction industry has yet to reach maturity owing to technical constraints. The need for effective information management presents challenges: both the transfer of building data to, and the organisation of building information within, the virtual environment require consideration. This paper reviews the applications and benefits of VR in the built environment field and reports on a collaboration between Loughborough University and South Bank University to overcome constraints on the use of the overall VR model for whole-lifecycle visualisation. The work at each research centre is concerned with an aspect of information management within VR applications for the built environment, and both data transfer and internal data organisation have been investigated. In this paper, similarities and differences between computer-aided design (CAD) and VR packages are first discussed. Three different approaches to the creation of VR models during the design stage are identified and described, with a view to providing shared understanding across the interdisciplinary groups involved. The suitable organisation of building information within the virtual environment is then further investigated. This work focused on the visualisation of the degradation of a building through its lifespan, with a view to providing a visual aid for developing an effective and economic project maintenance programme. Finally, consideration is given to the potential of emerging standards to facilitate an integrated use of VR. The convergence towards similar data structures in VR and other construction packages may enable visualisation to be better utilised in the overall lifecycle model.
Abstract:
Virtual reality has the potential to improve visualisation of building design and construction, but its implementation in the industry has yet to reach maturity. Present-day translation of building data to virtual reality is often unidirectional and unsatisfactory. Three different approaches to the creation of models are identified and described in this paper. Consideration is given to the potential of both advances in computer-aided design and the emerging standards for data exchange to facilitate an integrated use of virtual reality. Commonalities and differences between computer-aided design and virtual reality packages are reviewed, and trials of current systems are described. The trials were conducted to explore the technical issues related to the integrated use of CAD and virtual environments within the house-building sector of the construction industry and to investigate the practical use of the new technology.
Abstract:
In the earth sciences, data are commonly cast on complex grids in order to model irregular domains such as coastlines, or to evenly distribute grid points over the globe. It is common for a scientist to wish to re-cast such data onto a grid that is more amenable to manipulation, visualization, or comparison with other data sources. The complexity of the grids presents a significant technical difficulty to the regridding process. In particular, the regridding of complex grids may suffer from severe performance issues, in the worst case scaling with the product of the sizes of the source and destination grids. We present a mechanism for the fast regridding of such datasets, based upon the construction of a spatial index that allows fast searching of the source grid. We discover that the most efficient spatial index under test (in terms of memory usage and query time) is a simple look-up table. A kd-tree implementation was found to be faster to build and to give similar query performance at the expense of a larger memory footprint. Using our approach, we demonstrate that regridding of complex data may proceed at speeds sufficient to permit regridding on-the-fly in an interactive visualization application, or in a Web Map Service implementation. For large datasets with complex grids the new mechanism is shown to significantly outperform algorithms used in many scientific visualization packages.
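A rough Python sketch of the winning look-up-table index (not the authors' code): a regular longitude/latitude grid of buckets stores the indices of the source cells falling in each bucket, so the nearest source cell to a destination point is found by scanning one bucket rather than the whole source grid. The class name, bucket resolution and planar distance metric are illustrative simplifications.

import numpy as np

class LookupTableIndex:
    def __init__(self, src_lon, src_lat, nx=360, ny=180):
        # src_lon/src_lat: 1-D arrays of source cell-centre coordinates (degrees).
        self.lon, self.lat = np.asarray(src_lon), np.asarray(src_lat)
        self.nx, self.ny = nx, ny
        self.buckets = {}
        for idx in range(len(self.lon)):
            key = self._key(self.lon[idx], self.lat[idx])
            self.buckets.setdefault(key, []).append(idx)

    def _key(self, lon, lat):
        i = min(int((lon + 180.0) / 360.0 * self.nx), self.nx - 1)
        j = min(int((lat + 90.0) / 180.0 * self.ny), self.ny - 1)
        return i, j

    def nearest(self, lon, lat):
        # Search only the local bucket; a full implementation would spiral
        # outwards through neighbouring buckets if this one is empty, and
        # would use great-circle rather than planar distance.
        cand = np.array(self.buckets.get(self._key(lon, lat),
                                         range(len(self.lon))), dtype=int)
        d2 = (self.lon[cand] - lon) ** 2 + (self.lat[cand] - lat) ** 2
        return int(cand[np.argmin(d2)])

# Usage: index a scattered 'curvilinear' source grid, then query it.
rng = np.random.default_rng(1)
index = LookupTableIndex(rng.uniform(-180, 180, 10000), rng.uniform(-90, 90, 10000))
print(index.nearest(12.5, 48.1))   # index of the closest source cell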
Abstract:
What happens when digital coordination practices are introduced into the institutionalized setting of an engineering project? This question is addressed through an interpretive study that examines how a shared digital model becomes used in the late design stages of a major station refurbishment project. The paper contributes by mobilizing the idea of ‘hybrid practices’ to understand the diverse patterns of activity that emerge to manage digital coordination of design. It articulates how the engineering and architecture professions develop different relationships with the shared model; how the design team negotiates paper-based practices across organizational boundaries; and how diverse practitioners probe the potential and limitations of the digital infrastructure. While different software packages and tools have become linked together into an integrated digital infrastructure, these emerging hybrid practices contrast with the interactions anticipated in practice and policy guidance, and present new opportunities and challenges for managing project delivery. The study has implications for researchers working in the growing field of empirical work on engineering project organizations, as it shows the importance of considering, and suggests new ways to theorise, the introduction of digital coordination practices into these institutionalized settings.
Abstract:
Global temperatures are expected to rise by between 1.1 and 6.4°C this century, depending, to a large extent, on the amount of carbon we emit to the atmosphere from now onwards. This warming is expected to have very negative effects on many peoples and ecosystems, and minimising our carbon emissions is therefore a priority. Buildings are estimated to be responsible for around 50% of carbon emissions in the UK. Potential reductions involve both operational emissions, produced during use, and embodied emissions, produced during the manufacture of materials and components, and during construction, refurbishments and demolition. To date the major effort has focused on reducing the apparently larger operational element, which is more readily quantifiable, and for which reduction measures are relatively straightforward to identify and implement. Various studies have compared the magnitudes of embodied and operational emissions, but have shown considerable variation in the relative values. This illustrates the difficulty of quantifying embodied emissions, which requires detailed knowledge of the processes involved in the different life cycle phases and the use of consistent system boundaries. However, other studies have established the interaction between operational and embodied emissions, which demonstrates the importance of considering both elements together in order to maximise potential reductions. This is borne out in statements from both the Intergovernmental Panel on Climate Change and the Low Carbon Construction Innovation and Growth Team of the UK Government. In terms of meeting the 2020 and 2050 timeframes for carbon reductions, it appears to be equally, if not more, important to consider early embodied carbon reductions, rather than just future operational reductions. Future decarbonisation of energy supply, and the more efficient lighting and M&E equipment installed in future refits, is likely to significantly reduce operational emissions, lending further weight to this argument. A method of discounting to evaluate the present value of future carbon emissions would allow more realistic comparisons to be made of the relative importance of the embodied and operational elements. This paper describes the results of case studies on carbon emissions over the whole lifecycle of three buildings in the UK, compares four available software packages for determining embodied carbon, and suggests a method of carbon discounting to obtain present values for future emissions. These form the initial stages of a research project aimed at producing information on embodied carbon for different types of building, components and forms of construction, in a simplified form which can readily be used by building designers in optimising building design to minimise overall carbon emissions. Keywords: embodied carbon; carbon emissions; building; operational carbon.
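To make the proposed discounting concrete: the present value of an emission C_t occurring t years from now at discount rate r is C_t / (1 + r)^t. The short Python sketch below applies this to invented embodied and operational figures; the 3% rate and the numbers are assumptions for illustration, not values from the case studies.

def present_value(emissions, r=0.03):
    # emissions: iterable of (year, kgCO2e) pairs; discounts each to today.
    return sum(c / (1.0 + r) ** t for t, c in emissions)

embodied    = [(0, 500_000.0)]                        # all emitted at construction
operational = [(t, 20_000.0) for t in range(1, 51)]   # 20 tCO2e per year for 50 years
print(round(present_value(embodied)))     # 500000: today's emissions are undiscounted
print(round(present_value(operational)))  # ~514600: a nominal 1,000,000 shrinks

Under these assumed numbers, operational emissions that are nominally twice the embodied total come out roughly comparable in present-value terms, which is the kind of comparison the discounting method is intended to enable.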
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: the first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems; this approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection, describing a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
Abstract:
This paper proposes hybrid capital securities as a significant part of senior bank executive incentive compensation in light of Basel III, a new global regulatory standard on bank capital adequacy and liquidity agreed by the members of the Basel Committee on Banking Supervision. The committee developed Basel III in response to the deficiencies in financial regulation revealed by the global financial crisis. Basel III strengthens bank capital requirements and introduces new regulatory requirements on bank liquidity and bank leverage. The hybrid bank capital securities we propose for bank executives’ compensation are preferred shares and subordinated debt, which the June 2004 Basel II regulatory framework recognised as other admissible forms of capital. The past two decades have witnessed a dramatic increase in performance-related pay in the banking industry. Stakeholders such as shareholders, debtholders and regulators criticise traditional cash and equity-based compensation for encouraging bank executives’ excessive risk-taking and short-termism, which resulted in the failure of risk management at high-profile banks during the global financial crisis. Paying compensation in the form of hybrid bank capital securities may align the interests of executives with those of stakeholders and help banks regain their reputation for prudence after years of aggressive risk-taking. Additionally, banks are desperately seeking to raise capital in order to bolster balance sheets damaged by the ongoing credit crisis. Tapping their own senior employees with large incentive compensation packages may be a viable additional source of capital that is politically acceptable in times of large-scale bailouts of the financial sector, and economically wise, as it aligns the interests of the executives with the need for a stable financial system.