811 results for Architecture and Complexity


Relevance: 90.00%

Abstract:

Energy saving in mobile hydraulic machinery, aimed at reducing fuel consumption, has been one of the principal interests of many researchers and OEMs in recent years. Many different solutions have been proposed and investigated in the literature to improve fuel efficiency, ranging from novel system architectures and control strategies to hybrid solutions. This thesis deals with the energy analysis of the hydraulic system of a mid-size excavator through mathematical tools. To conduct the analyses, a multibody mathematical model of the hydraulic excavator under investigation will be developed and validated on the basis of experimental activities, both on a test bench and in the field. The analyses will be carried out considering the typical working cycles of excavators defined by the JCMAS standard. The simulation results will be analysed and discussed in detail in order to define different solutions for energy saving in load-sensing (LS) hydraulic systems. Among the proposed energy saving solutions, energy recovery systems seem very promising for fuel consumption reduction in mobile machinery. In this thesis a novel energy recovery system architecture will be proposed and described in detail. Its dimensioning procedure takes advantage of a dynamic programming algorithm, and a prototype will be realized and tested on the excavator under investigation. Finally, the proposed energy saving solutions will be compared against the standard machine architecture, and a novel hybrid excavator with an energy saving of up to 11% will be presented.
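The abstract does not detail the dimensioning procedure; as a hedged illustration only, a backward dynamic programming pass over a discretised accumulator state of charge might look like the sketch below. The duty-cycle power profile, accumulator capacity and grid resolution are invented values, not the thesis's data.

```python
# Illustrative backward dynamic programming for sizing an energy recovery
# accumulator: at each step of a duty cycle, decide how much recoverable
# power to store, minimising the recoverable energy that must be wasted.
# All names and numbers are assumptions for this sketch.

def dp_wasted_energy(recoverable_kw, capacity_kwh, dt_h=1.0 / 3600, levels=21):
    """Minimal wasted recoverable energy (kWh) over one cycle, for an
    accumulator of the given capacity, starting from an empty state."""
    soc_grid = [capacity_kwh * i / (levels - 1) for i in range(levels)]
    cost_to_go = [0.0] * levels          # terminal cost: nothing left to waste
    eps = 1e-12                          # tolerance for float comparisons
    for p_rec in reversed(recoverable_kw):
        step_energy = p_rec * dt_h       # energy recoverable in this step
        new_cost = []
        for soc in soc_grid:
            best = float("inf")
            for j, soc_next in enumerate(soc_grid):
                stored = soc_next - soc  # energy moved into the accumulator
                if -eps <= stored <= step_energy + eps:
                    wasted = max(0.0, step_energy - stored)
                    best = min(best, wasted + cost_to_go[j])
            new_cost.append(best)
        cost_to_go = new_cost
    return cost_to_go[0]
```

With a sufficiently large accumulator no recoverable energy is wasted; shrinking the capacity raises the wasted share, which is exactly the trade-off a dimensioning procedure of this kind explores.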

Relevance: 90.00%

Abstract:

In this thesis, we consider four different scenarios of interest in modern satellite communications. For each scenario, we will propose the use of advanced solutions aimed at increasing the spectral efficiency of the communication links. First, we will investigate the optimization of the current standard for digital video broadcasting. We will increase the symbol rate of the signal and determine the optimal signal bandwidth. We will apply the time packing technique and propose a specifically designed constellation. We will then compare some receiver architectures with different performance and complexity. The second scenario still addresses broadcast transmissions, but in a network composed of two satellites. We will compare three alternative transceiver strategies, namely signals completely overlapped in frequency, frequency division multiplexing, and the Alamouti space-time block code, and, for each technique, we will derive theoretical results on the achievable rates. We will also evaluate the performance of these techniques in three different channel models. The third scenario deals with the application of multiuser detection in multibeam satellite systems. We will analyze a case in which the users are near the edge of the coverage area and hence experience a high level of interference from adjacent cells. Also in this case, three different approaches will be compared: a classical approach in which each beam carries information for a single user, a cooperative solution based on time division multiplexing, and the Alamouti scheme. The information-theoretic analysis will be followed by the study of practical coded schemes. We will show that the theoretical bounds can be approached by a properly designed code or bit mapping. Finally, we will consider an Earth observation scenario, in which data is generated on the satellite and then transmitted to the ground.
We will study two channel models, taking into account one or two transmit antennas, and apply techniques such as time and frequency packing, signal predistortion, multiuser detection and the Alamouti scheme.
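Since the Alamouti space-time block code recurs in several of the scenarios above, a minimal sketch of its encoding and linear combining may help. This assumes a flat-fading channel perfectly known at the receiver and a single receive antenna; it is a textbook illustration, not the thesis's receiver design.

```python
# Minimal Alamouti 2x1 scheme: two symbols are sent over two time slots
# from two antennas, and simple linear combining recovers both of them.

def alamouti_encode(s1, s2):
    """Rows are time slots, columns are transmit antennas."""
    return [[s1, s2],
            [-s2.conjugate(), s1.conjugate()]]

def alamouti_combine(r1, r2, h1, h2):
    """Combine the two received samples given channel gains h1, h2.
    Each estimate equals (|h1|^2 + |h2|^2) times the sent symbol."""
    s1_est = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_est = h2.conjugate() * r1 - h1 * r2.conjugate()
    return s1_est, s2_est
```

In a noiseless channel the estimates are exact scaled copies of the symbols; with noise, the scheme provides second-order transmit diversity without requiring channel knowledge at the transmitter.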

Relevance: 90.00%

Abstract:

Aquifers are a vital water resource whose quality characteristics must be safeguarded or, if damaged, restored. The extent and complexity of aquifer contamination are related to the characteristics of the porous medium, the influence of boundary conditions, and the biological, chemical and physical processes involved. Since the 1990s, research efforts have grown considerably toward finding efficient ways of estimating the hydraulic parameters of aquifers and, from these, recovering the contaminant source position and its release history. To simplify and understand the influence of these various factors on aquifer phenomena, researchers commonly use numerical and controlled experiments. This work presents some of these methods, applying and comparing them on data collected during laboratory, field and numerical tests. The work is structured in four parts, which present the results and conclusions of the specific objectives.

Relevance: 90.00%

Abstract:

In many Environmental Information Systems the actual observations arise from a discrete monitoring network which might be rather heterogeneous in both location and types of measurements made. In this paper we describe the architecture and infrastructure for a system, developed as part of the EU FP6 funded INTAMAP project, to provide a service oriented solution that allows the construction of an interoperable, automatic, interpolation system. This system will be based on the Open Geospatial Consortium’s Web Feature Service (WFS) standard. The essence of our approach is to extend the GML3.1 observation feature to include information about the sensor using SensorML, and to further extend this to incorporate observation error characteristics. Our extended WFS will accept observations, and will store them in a database. The observations will be passed to our R-based interpolation server, which will use a range of methods, including a novel sparse, sequential kriging method (only briefly described here), to produce an internal representation of the interpolated field resulting from the observations currently uploaded to the system. The extended WFS will then accept queries, such as ‘What is the probability distribution of the desired variable at a given point?’, ‘What is the mean value over a given region?’, or ‘What is the probability of exceeding a certain threshold at a given location?’. To support information-rich transfer of complex and uncertain predictions we are developing schemas to represent probabilistic results in a GML3.1 (object-property) style. The system will also offer more easily accessible Web Map Service and Web Coverage Service interfaces to allow users to access the system at the level of complexity they require for their specific application.
Such a system will offer a very valuable contribution to the next generation of Environmental Information Systems in the context of real time mapping for monitoring and security, particularly for systems that employ a service oriented architecture.
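The paper's sparse, sequential kriging method is not described in the abstract; for orientation only, here is a generic one-dimensional ordinary kriging predictor with an assumed exponential covariance. It returns both a mean and a variance, which is the kind of probabilistic output the WFS queries above rely on; the covariance model and its parameters are illustrative assumptions.

```python
import numpy as np

def ordinary_kriging(xs, ys, x0, range_=1.0, sill=1.0):
    """Predict the value at x0 from observations (xs, ys) under an
    exponential covariance model; returns (mean, variance)."""
    def cov(a, b):
        return sill * np.exp(-abs(a - b) / range_)
    n = len(xs)
    # Ordinary kriging system: covariances plus a Lagrange multiplier row
    # enforcing that the weights sum to one.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = [[cov(xi, xj) for xj in xs] for xi in xs]
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = [cov(xi, x0) for xi in xs]
    w = np.linalg.solve(A, b)
    mean = float(np.dot(w[:n], ys))
    var = float(sill - np.dot(w, b))
    return mean, var
```

Note that at an observation location the predictor reproduces the observed value with zero variance (exact interpolation), while between observations the variance grows, quantifying the uncertainty the system exposes to its clients.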

Relevance: 90.00%

Abstract:

This paper introduces a compact form for the maximum value of the non-Archimedean epsilon in Data Envelopment Analysis (DEA) models applied to technology selection, without the need to solve a linear program (LP). Using this method, the computational performance of the common-weight multi-criteria decision-making (MCDM) DEA model proposed by Karsak and Ahiska (International Journal of Production Research, 2005, 43(8), 1537-1554) is improved. This improvement is significant when computational issues and complexity analysis are a concern.
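The paper's compact form is not reproduced in the abstract; for context only, the non-Archimedean epsilon appears in the standard input-oriented CCR multiplier model as a strict positive lower bound on the weights, and it is the maximum feasible value of this bound that such closed forms characterise:

```latex
\begin{aligned}
\max_{u,v}\quad & \sum_{r} u_r\, y_{ro} \\
\text{s.t.}\quad & \sum_{i} v_i\, x_{io} = 1, \\
& \sum_{r} u_r\, y_{rj} - \sum_{i} v_i\, x_{ij} \le 0 \quad \text{for all units } j, \\
& u_r \ge \varepsilon, \qquad v_i \ge \varepsilon .
\end{aligned}
```

Choosing epsilon too large can make the model infeasible, which is why a closed-form upper bound that avoids solving an auxiliary LP is computationally attractive.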

Relevance: 90.00%

Abstract:

This study reports a qualitative phenomenological investigation of anger and anger-related aggression in the context of the lives of individual women. Semistructured interviews with five women are analyzed using interpretative phenomenological analysis. This inductive approach aims to capture the richness and complexity of the lived experience of emotional life. In particular, it draws attention to the context-dependent and relational dimension of angry feelings and aggressive behavior. Three analytic themes are presented here: the subjective experience of anger, which includes the perceptual confusion and bodily change felt by the women when angry, crying, and the presence of multiple emotions; the forms and contexts of aggression, paying particular attention to the range of aggressive strategies used; and anger as moral judgment, in particular perceptions of injustice and unfairness. The authors conclude by examining the analytic observations in light of phenomenological thinking.

Relevance: 90.00%

Abstract:

The concept of mobility, related to technology in particular, has evolved dramatically over the last two decades, including: (i) hardware ranging from Walkmans to iPods, laptops to netbooks, and PDAs to 3G mobile phones; (ii) software supporting multiple audio and video formats, driven by ubiquitous mobile wireless access, WiMAX, and automations such as radio frequency ID tracking and location-aware services. Against the background of increasing budget deficits, along with the imperative for efficiency gains, leveraging ICT and the promise of mobility for work-related tasks in a public administration context in emerging markets points to multiple possible paths. The m-government transition involves both technological change and its adoption to deliver government services differently (e.g. 24/7, error free, anywhere, to the same standards), but also the design of digital strategies, including possibly competing m-government models, the re-shaping of cultural practices, the creation of m-policies and legislation, the structuring of an m-services architecture, and progress regarding m-governance. While many emerging countries already offer e-government services and are gearing up for further m-government activities, little is actually known about the resistance that is encountered, as a reflection of civil servants' current standing, before any further macro-strategies are deployed. Drawing on the resistance and mobility literature, this chapter investigates how civil servants, in an emerging country's technological environment and through their everyday practice, react to and resist the influence of the m-government transition. The findings point to four main types of resistance, namely: (i) functional resistance; (ii) ideological resistance; (iii) market-driven resistance; and (iv) geographical resistance. Policy implications are discussed in the specific context of emerging markets. © 2011, IGI Global.

Relevance: 90.00%

Abstract:

Respiration is a complex activity. If the relationship between all neurological and skeletomuscular interactions were perfectly understood, an accurate dynamic model of the respiratory system could be developed and the interaction between different inputs and outputs could be investigated in a straightforward fashion. Unfortunately, this is not the case and does not appear to be viable at this time. In addition, providing appropriate sensor signals for such a model would be a considerably invasive task. Useful quantitative information on respiratory performance can be gained from non-invasive monitoring of chest and abdomen motion. Currently available devices are not well suited to spirometric measurement in ambulatory monitoring. A sensor matrix measurement technique is investigated to identify suitable sensing elements on which to base an upper body surface measurement device that monitors respiration. This thesis is divided into two main areas of investigation: model-based and geometrical-based surface plethysmography. In the first instance, chapter 2 deals with an array of tactile sensors used as a progression of existing and previously investigated volumetric measurement schemes based on models of respiration. Chapter 3 details a non-model-based geometrical approach to surface (and hence volumetric) profile measurement. Later sections of the thesis concentrate upon the development of a functioning prototype sensor array. To broaden the application area, the study has been conducted as it would be for a generically configured sensor array. In experimental form, the system's performance on group estimation compares favourably with existing systems on volumetric performance. In addition, it provides continuous transient measurement of respiratory motion within an acceptable accuracy using approximately 20 sensing elements.
Because of the potential size and complexity of the system, it is possible to deploy it as a fully mobile ambulatory monitoring device which may be used outside the laboratory. It provides a means by which to isolate coupled physiological functions, allowing individual contributions to be analysed separately and thus facilitating greater understanding of respiratory physiology and diagnostic capabilities. The outcome of the study is the basis for a three-dimensional surface contour sensing system that is suitable for respiratory function monitoring and has the prospect, with future development, of being incorporated into a garment-based clinical tool.

Relevance: 90.00%

Abstract:

The success of mainstream computing is largely due to the widespread availability of general-purpose architectures and of generic approaches that can be used to solve real-world problems cost-effectively and across a broad range of application domains. In this chapter, we propose that a similar generic framework be used to make the development of autonomic solutions cost-effective, and to establish autonomic computing as a major approach to managing the complexity of today’s large-scale systems and systems of systems. To demonstrate the feasibility of general-purpose autonomic computing, we introduce a generic autonomic computing framework comprising a policy-based autonomic architecture and a novel four-step method for the effective development of self-managing systems. A prototype implementation of the reconfigurable policy engine at the core of our architecture is then used to develop autonomic solutions for case studies from several application domains. Looking into the future, we describe a methodology for the engineering of self-managing systems that extends and generalises our autonomic computing framework further.
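The chapter's policy engine is not specified in the abstract; as an illustrative assumption only, a policy-based self-management loop can be reduced to condition-action rules evaluated against monitored state. All rule and action names below are invented.

```python
# A minimal policy engine sketch: policies map an observed condition to an
# action, mirroring the monitor-analyse-plan-execute style of autonomic
# systems. Rules and state keys are illustrative, not the chapter's engine.

class PolicyEngine:
    def __init__(self):
        self.policies = []  # list of (condition, action) pairs

    def add_policy(self, condition, action):
        self.policies.append((condition, action))

    def step(self, state):
        """One autonomic cycle: evaluate every policy against the monitored
        state and return the actions selected for execution."""
        return [action(state) for condition, action in self.policies
                if condition(state)]

engine = PolicyEngine()
engine.add_policy(lambda s: s["cpu"] > 0.9,   # overloaded: add capacity
                  lambda s: "scale_out")
engine.add_policy(lambda s: s["cpu"] < 0.2,   # underloaded: shed capacity
                  lambda s: "scale_in")
```

Because the rules are data rather than code, reconfiguring the engine at runtime amounts to swapping the policy list, which is what makes a policy-based architecture generic across application domains.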

Relevance: 90.00%

Abstract:

This paper presents some forecasting techniques for energy demand and price prediction, one day ahead. These techniques combine wavelet transform (WT) with fixed and adaptive machine learning/time series models (multi-layer perceptron (MLP), radial basis functions, linear regression, or GARCH). To create an adaptive model, we use an extended Kalman filter or particle filter to update the parameters continuously on the test set. The adaptive GARCH model is a new contribution, broadening the applicability of GARCH methods. We empirically compared two approaches of combining the WT with prediction models: multicomponent forecasts and direct forecasts. These techniques are applied to large sets of real data (both stationary and non-stationary) from the UK energy markets, so as to provide comparative results that are statistically stronger than those previously reported. The results showed that the forecasting accuracy is significantly improved by using the WT and adaptive models. The best models on the electricity demand/gas price forecast are the adaptive MLP/GARCH with the multicomponent forecast; their MSEs are 0.02314 and 0.15384 respectively.
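The paper's adaptive models update parameters online with an extended Kalman filter or particle filter; as a toy stand-in, a plain scalar Kalman filter can track a single regression coefficient as each new observation arrives. The noise settings and the data below are invented for illustration.

```python
def kalman_update(theta, P, x, y, q=1e-4, r=1.0):
    """One online update of a scalar regression y ~ theta * x.
    theta: current estimate, P: its variance, q/r: process/measurement noise."""
    P = P + q                             # predict: the parameter may drift
    K = P * x / (x * x * P + r)           # Kalman gain
    theta = theta + K * (y - theta * x)   # correct with the prediction error
    P = (1 - K * x) * P                   # shrink the parameter uncertainty
    return theta, P

# Track a coefficient whose true value is about 2 from noisy samples.
theta, P = 0.0, 1.0
for x, y in [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (1.5, 3.0)]:
    theta, P = kalman_update(theta, P, x, y)
```

Running the update on the test set, as the paper does, lets the model parameters follow non-stationary behaviour instead of staying frozen at their training-set values.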

Relevance: 90.00%

Abstract:

There has been a resurgence of interest in the neural networks field in recent years, provoked in part by the discovery of the properties of multi-layer networks. This interest has in turn raised questions about the possibility of making neural network behaviour more adaptive by automating some of the processes involved. Prior to these particular questions, the process of determining the parameters and network architecture required to solve a given problem had been a time-consuming activity. A number of researchers have attempted to address these issues by automating these processes, concentrating in particular on the dynamic selection of an appropriate network architecture. The work presented here specifically explores the area of automatic architecture selection; it focuses upon the design and implementation of a dynamic algorithm based on the back-propagation learning algorithm. The algorithm constructs a single hidden layer as the learning process proceeds, using individual pattern error as the basis of unit insertion. This algorithm is applied to several problems of differing type and complexity and is found to produce near-minimal architectures that are shown to have a high level of generalisation ability.
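The constructive scheme described above (a single hidden layer grown during back-propagation, with unit insertion driven by per-pattern error) can be caricatured as follows. The learning rate, error threshold, unit cap and insertion rule are invented stand-ins, not the thesis's algorithm.

```python
import math, random

def train_constructive(data, max_units=5, epochs=200, lr=0.1, tol=0.05):
    """Grow a single hidden layer: train with plain back-propagation and,
    while the worst per-pattern error stays above tol, insert a new unit."""
    random.seed(0)
    n_in = len(data[0][0])
    W, v, b = [], [], []            # hidden weights, output weights, biases

    def add_unit():
        W.append([random.uniform(-1, 1) for _ in range(n_in)])
        b.append(random.uniform(-1, 1))
        v.append(random.uniform(-1, 1))

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(Wj, x)) + bj)
             for Wj, bj in zip(W, b)]
        return h, sum(vj * hj for vj, hj in zip(v, h))

    add_unit()
    while True:
        for _ in range(epochs):
            for x, t in data:
                h, y = forward(x)
                e = y - t
                for j in range(len(W)):
                    g = e * v[j] * (1 - h[j] ** 2)   # back-propagated error
                    v[j] -= lr * e * h[j]
                    b[j] -= lr * g
                    W[j] = [w - lr * g * xi for w, xi in zip(W[j], x)]
        worst = max(abs(forward(x)[1] - t) for x, t in data)
        if worst < tol or len(W) >= max_units:
            return W, v, b, worst
        add_unit()                  # pattern error still high: grow the layer

data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]   # XOR
W, v, b, worst = train_constructive(data)
```

The point of the sketch is the control flow: training and architecture search are interleaved, so the final hidden-layer size is an output of learning rather than a hand-chosen hyperparameter.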

Relevance: 90.00%

Abstract:

The revival of terracotta and faience in British architecture was widespread, dramatic in its results and, for two decades, the subject of intense debate. However, the materials have been frequently denigrated and more generally disregarded by both architects and historians. This study sets out to record and explain the rise and fall of interest in terracotta and faience, the extent and nature of the industry and the range of architectural usage in the Victorian, Edwardian and inter-war periods. The first two chapters record the faltering use of terracotta as an 'artificial stone', until the material gained its own identity, largely through the appreciation of Italian architecture. In the mid-Victorian period, terracotta will be seen to have become symbolic of the philosophy of the Victoria and Albert Museum and its Art School in attempting to reform both architecture and the decorative arts. The adoption of terracotta was furthered as much by industrial as aesthetic factors; three chapters examine how the exploitation of coal-measure clays, developments in the processes of manufacture, the changing motivation of industrialists and differing economics of production served to promote and then to hinder expansion and adaptation. The practical values of economy, durability and fire-resistance and the aesthetic potential, seen in terms of colour and decorative and sculptural modelling, became inter-related in the work of the architects who made extensive use of architectural ceramics. A correlation emerges between the free Gothic style, exemplified by the designs of Alfred Waterhouse, and the use of red terracotta supplied from Ruabon, and between the eclectic Renaissance style and a buff material produced by different manufacturers. These patterns were modified as a result of the adoption of faience for facing external walls as well as interiors, and because of the new architectural requirements and tastes of the twentieth century.
The general timidity in exploiting the scope for polychromatic decoration and the increasing opposition to architectural ceramics is contrasted with the most successful schemes produced for cinemas, chain-stores and factories. In the last chapter, those undertaken by the Hathern Station Brick and Terracotta Company between 1896 and 1939 are used as a case study; they confirm that manufacturers, architects and clients were all committed to creating a modern yet decorative architecture, appropriate for new building types, that would appeal to and be comprehensible to the public.

Relevance: 90.00%

Abstract:

The aim of this work was to investigate human contrast perception at various contrast levels ranging from detection threshold to suprathreshold levels by using psychophysical techniques. The work consists of two major parts. The first part deals with contrast matching, and the second part deals with contrast discrimination. Contrast matching technique was used to determine when the perceived contrasts of different stimuli were equal. The effects of spatial frequency, stimulus area, image complexity and chromatic contrast on contrast detection thresholds and matches were studied. These factors influenced detection thresholds and perceived contrast at low contrast levels. However, at suprathreshold contrast levels perceived contrast became directly proportional to the physical contrast of the stimulus and almost independent of factors affecting detection thresholds. Contrast discrimination was studied by measuring contrast increment thresholds which indicate the smallest detectable contrast difference. The effects of stimulus area, external spatial image noise and retinal illuminance were studied. The above factors affected contrast detection thresholds and increment thresholds measured at low contrast levels. At high contrast levels, contrast increment thresholds became very similar so that the effect of these factors decreased. Human contrast perception was modelled by regarding the visual system as a simple image processing system. A visual signal is first low-pass filtered by the ocular optics. This is followed by spatial high-pass filtering by the neural visual pathways, and addition of internal neural noise. Detection is mediated by a local matched filter which is a weighted replica of the stimulus whose sampling efficiency decreases with increasing stimulus area and complexity. According to the model, the signals to be compared in a contrast matching task are first transferred through the early image processing stages mentioned above. 
Then they are filtered by a restoring transfer function which compensates for the low-level filtering and limited spatial integration at high contrast levels. Perceived contrasts of the stimuli are equal when the restored responses to the stimuli are equal. According to the model, the signals to be discriminated in a contrast discrimination task first go through the early image processing stages, after which signal dependent noise is added to the matched filter responses. The decision made by the human brain is based on the comparison between the responses of the matched filters to the stimuli, and the accuracy of the decision is limited by pre- and post-filter noises. The model for human contrast perception could accurately describe the results of contrast matching and discrimination in various conditions.
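The model's final stage, detection by a matched filter operating on noisy internal responses, can be illustrated with a toy two-interval simulation. The template, noise level and trial count below are arbitrary choices for the sketch, not fitted model parameters.

```python
import math, random

def matched_filter_response(stimulus, template):
    """Decision variable of a matched filter: correlation of the (noisy)
    internal response with a weighted replica of the stimulus."""
    return sum(s * t for s, t in zip(stimulus, template))

def detect(contrast, template, noise_sd, trials=2000, seed=1):
    """Fraction of trials in which the signal interval beats the blank
    interval (two-interval forced choice against internal noise)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        signal = matched_filter_response(
            [contrast * t + rng.gauss(0, noise_sd) for t in template], template)
        blank = matched_filter_response(
            [rng.gauss(0, noise_sd) for _ in template], template)
        hits += signal > blank
    return hits / trials

# A one-dimensional "grating" used as both stimulus and template.
template = [math.sin(2 * math.pi * f / 16) for f in range(16)]
```

At zero contrast performance sits at chance, and it rises toward certainty as contrast grows relative to the internal noise, which is the qualitative shape of the detection thresholds the abstract describes.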

Relevance: 90.00%

Abstract:

OBJECTIVES: The objective of this research was to design a clinical decision support system (CDSS) that supports heterogeneous clinical decision problems and runs on multiple computing platforms. Meeting this objective required a novel design to create an extendable and easy-to-maintain CDSS for point-of-care support. The proposed solution was evaluated in a proof-of-concept implementation. METHODS: Based on our earlier research on the design of a mobile CDSS for emergency triage, we used ontology-driven design to represent the essential components of a CDSS. Models of clinical decision problems were derived from the ontology and processed into executable applications at runtime. This allowed scaling an application's functionality to the capabilities of the computing platform. A prototype of the system was implemented using an extended client-server architecture and Web services to distribute the functions of the system and to make it operational in limited connectivity conditions. RESULTS: The proposed design provided a common framework that facilitated the development of diversified clinical applications running seamlessly on a variety of computing platforms. It was prototyped for two clinical decision problems and settings (triage of acute pain in the emergency department and postoperative management of radical prostatectomy on the hospital ward) and implemented on two computing platforms: desktop and handheld computers. CONCLUSIONS: The requirement of CDSS heterogeneity was satisfied with ontology-driven design. Processing application models described with the help of ontological models allowed a complex system to run on multiple computing platforms with different capabilities. Finally, the separation of models and runtime components contributed to the improved extensibility and maintainability of the system.
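The pipeline described above (declarative decision-problem models processed into applications whose functionality scales to the platform) can be caricatured as follows; the model fields, component names and capability labels are invented for illustration, not the paper's ontology.

```python
# Toy model-driven assembly: a declarative decision-problem model is
# filtered against a platform's capabilities, so the same model yields a
# richer desktop application and a leaner handheld one.

TRIAGE_MODEL = {
    "problem": "acute pain triage",
    "components": [
        {"name": "data_entry",     "needs": set()},
        {"name": "risk_score",     "needs": set()},
        {"name": "trend_charts",   "needs": {"large_display"}},
        {"name": "full_guideline", "needs": {"large_display", "online"}},
    ],
}

def build_app(model, capabilities):
    """Keep every component whose requirements the platform satisfies."""
    return [c["name"] for c in model["components"]
            if c["needs"] <= capabilities]

desktop  = build_app(TRIAGE_MODEL, {"large_display", "online"})
handheld = build_app(TRIAGE_MODEL, set())
```

Keeping the model separate from the runtime that interprets it is what lets one description drive applications of different complexity, which is the maintainability argument the conclusions make.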

Relevance: 90.00%

Abstract:

Phosphorylation processes are common post-translational mechanisms by which a number of metabolic pathways can be modulated. Proteins are highly sensitive to phosphorylation, which governs many protein-protein interactions. The enzymatic activity of some protein tyrosine kinases is under tyrosine-phosphorylation control, as are several transmembrane anion fluxes and cation exchanges. In addition, phosphorylation reactions are involved in intra- and extracellular 'cross-talk' processes. Early studies adopted laboratory animals to study these little-known phosphorylation processes. The main difficulty encountered with these animal techniques was obtaining sufficient kinase or phosphatase activity suitable for studying the enzymatic process: large amounts of biological material from organs such as the liver and spleen were necessary to conduct such work with protein kinases. Subsequent studies revealed the ubiquity and complexity of phosphorylation processes, and techniques evolved from early rat studies to the adoption of more rewarding in vitro models. These involved human erythrocytes, which are a convenient source both for the enzymes we investigated and for their substrates. This preliminary work facilitated the development of more advanced phosphorylative models based on cell lines. © 2005 Elsevier B.V. All rights reserved.