35 results for Software-based techniques
Abstract:
The G-protein coupled receptors (GPCRs) comprise simultaneously one of the largest and one of the most multi-functional protein families known to modern-day molecular bioscience. From a drug discovery and pharmaceutical industry perspective, the GPCRs constitute one of the most commercially and economically important groups of proteins known. The GPCRs undertake numerous vital metabolic functions and interact with a hugely diverse range of small and large ligands. Many different methodologies have been developed to efficiently and accurately classify the GPCRs. These range from motif-based techniques to machine learning, as well as a variety of alignment-free techniques based on the physicochemical properties of sequences. We review here the available methodologies for the classification of GPCRs. Part of this work focuses on how we have tried to build the intrinsically hierarchical nature of sequence relations, implicit within the family, into an adaptive approach to classification. Importantly, we also allude to some of the key innate problems in developing an effective approach to classifying the GPCRs: the lack of sequence similarity between the six classes that comprise the GPCR family and the low sequence similarity to other family members evinced by many newly revealed members of the family.
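As an illustration of the alignment-free side of these methodologies, the sketch below represents each sequence by its amino-acid composition and assigns a query sequence to the nearest class centroid. The sequences and class labels are hypothetical placeholders; the classifiers reviewed in the paper use richer physicochemical features and machine learning models.

```python
# Minimal alignment-free classification sketch: amino-acid composition
# features plus a nearest-centroid rule. Training data are hypothetical.
from collections import Counter
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Fraction of each amino acid in the sequence (alignment-free features)."""
    counts = Counter(seq.upper())
    total = max(len(seq), 1)
    return [counts.get(aa, 0) / total for aa in AMINO_ACIDS]

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(seq, centroids):
    """Return the class whose composition centroid is closest (Euclidean)."""
    features = composition(seq)
    return min(centroids, key=lambda cls: math.dist(features, centroids[cls]))

# Hypothetical training data: {class label: [sequences]}
training = {
    "ClassA": ["MNGTEGPNFYVPFSNKTGVV", "MDILCEENTSLSSTTNSLMQ"],
    "ClassB": ["MAGAPGPLRLALLLLGMVGR", "MASMASLSPARSPTLAAAAA"],
}
centroids = {cls: centroid([composition(s) for s in seqs])
             for cls, seqs in training.items()}
print(classify("MNGTEGLNFYVPFSNKTGLV", centroids))
```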
Abstract:
- We conduct a meta-analysis of 54 papers that study the relationship between multinationality and firm performance. The aim is to understand whether any systematic relationships exist between the characteristics of each study and the reported results of the linear and curvilinear regressions used to examine the multinationality-performance relationship.
- Our main finding, robust to different specifications and to different weights for each observation, is that when the analysis is based on non-US data, the reported return to multinationality is higher. However, the relationship for non-US firms is usually U-shaped rather than inverted U-shaped. This indicates that US firms face lower returns to internationalization than other firms but are less likely to incur losses in the early stages of internationalization.
- The findings also highlight the differences reported when comparing regression-based and non-regression-based techniques. Our results suggest that in this area regression-based analysis is more reliable than, say, ANOVA or other related approaches.
- Other characteristics that influence the estimated rate of return and its shape across studies are the measure of multinationality used, the size distribution of the sample, and the use of market-based indicators to measure firm performance. Finally, we find no evidence of publication bias.
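As a sketch of the linear versus curvilinear comparison underlying the meta-analysed studies, the snippet below fits both a linear and a quadratic regression of performance on multinationality and reads the sign of the squared term (positive suggesting a U-shape, negative an inverted U-shape). The data are synthetic placeholders, not figures from any of the 54 papers.

```python
# Compare a linear fit with a quadratic (curvilinear) fit on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
multinationality = rng.uniform(0, 1, 200)            # e.g. foreign sales ratio
performance = (0.4 * multinationality ** 2 - 0.1 * multinationality
               + rng.normal(0, 0.05, 200))           # synthetic U-shaped relation

linear = np.polyfit(multinationality, performance, 1)     # [slope, intercept]
quadratic = np.polyfit(multinationality, performance, 2)  # [a, b, c]

shape = "U-shaped" if quadratic[0] > 0 else "inverted U-shaped"
print(f"linear slope: {linear[0]:.3f}")
print(f"quadratic term: {quadratic[0]:.3f} -> {shape} relationship")
```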
Abstract:
Suboptimal maternal nutrition during gestation results in the establishment of long-term phenotypic changes and an increased disease risk in the offspring. To elucidate how such environmental sensitivity results in physiological outcomes, the molecular characterisation of these offspring has become the focus of many studies. However, the likely modification of key cellular processes such as metabolism in response to maternal undernutrition raises the question of whether the genes typically used as reference constants in gene expression studies are suitable controls. Using a mouse model of maternal protein undernutrition, we have investigated the stability of seven commonly used reference genes (18s, Hprt1, Pgk1, Ppib, Sdha, Tbp and Tuba1) in a variety of offspring tissues including liver, kidney, heart, retroperitoneal and interscapular fat, extra-embryonic placenta and yolk sac, as well as in the preimplantation blastocyst and blastocyst-derived embryonic stem cells. We find that although the selected reference genes are all highly stable within this system, they show tissue-, treatment- and sex-specific variation. Furthermore, software-based selection approaches rank reference genes differently and do not always identify genes which differ between conditions. Therefore, we recommend that reference gene selection for gene expression studies should be thoroughly validated for each tissue of interest. © 2011 Elsevier Inc.
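To illustrate what a software-based reference-gene selection step looks like, the sketch below ranks hypothetical candidate genes by a simple stability score, the coefficient of variation across samples. Published tools such as geNorm and NormFinder use more elaborate pairwise or model-based measures, so this is only a schematic of the idea, with made-up expression values.

```python
# Rank candidate reference genes by coefficient of variation (lower = more stable).
from statistics import mean, stdev

expression = {                      # gene -> expression values across samples (hypothetical)
    "Hprt1": [21.3, 21.5, 21.2, 21.6, 21.4],
    "Pgk1":  [19.8, 20.9, 19.5, 21.1, 20.2],
    "Tbp":   [24.1, 24.0, 24.2, 24.1, 24.3],
}

def stability(values):
    """Coefficient of variation across samples."""
    return stdev(values) / mean(values)

ranking = sorted(expression, key=lambda gene: stability(expression[gene]))
for gene in ranking:
    print(f"{gene}: CV = {stability(expression[gene]):.4f}")
```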
Abstract:
A refractive index sensing system based upon an in-line fibre long period grating (LPG) Mach-Zehnder interferometer with a heterodyne interrogation technique has been demonstrated. This sensing system has accuracy comparable to laboratory-based techniques used in industry such as high performance liquid chromatography and UV spectroscopy. The advantage of this system is that measurements can be made in situ for applications in continuous process control. Compared to other refractive index sensing schemes using LPGs, this approach has two main advantages. Firstly, the system relies on a simple optical interrogation scheme and therefore has real potential to be low cost; secondly, so far as we are aware, it provides the highest refractive index resolution reported for any fibre LPG device.
Abstract:
The modern grid system, or smart grid, is likely to be populated with multiple distributed energy sources, e.g. wind power, PV power and plug-in electric vehicles (PEVs). It will also include a variety of linear and nonlinear loads. The intermittent nature of renewable energies such as PV and wind turbines, together with the increased penetration of electric vehicles (EVs), makes stable operation of the utility grid system challenging. In order to ensure stable operation of the utility grid system and to support smart grid functionalities such as fault ride-through, frequency response, reactive power support and mitigation of power quality issues, an energy storage system (ESS) could play an important role. A fast-acting bidirectional energy storage system which can rapidly provide and absorb power and/or VARs for a sufficient time is a potentially valuable tool to support this functionality. Battery energy storage systems (BESS) are one of a range of suitable energy storage technologies because they can provide and absorb power for a sufficient time and respond reasonably quickly. Conventional BESS already existing on the grid system are made up primarily of new batteries. The cost of these batteries can be high, which makes most BESS an expensive solution. In order to assist the move towards a low carbon economy and to reduce battery cost, this work aims to research the opportunities for re-using batteries on the electricity grid system after their primary use in low and ultra-low carbon vehicles (EV/HEV). This research aims to develop a new generation of second life battery energy storage systems (SLBESS) which could interface to the low/medium voltage network to provide the necessary grid support in a reliable and cost-effective manner. The reliability/performance of these batteries is not clear, but is almost certainly worse than that of a new battery. Manufacturers indicate that a mixture of gradual degradation and sudden failure are both possible, and failure mechanisms are likely to be related to how hard the batteries were driven inside the vehicle. Figures from a number of sources, including DECC (Department of Energy and Climate Change) and Arup and Cenex reports, indicate anything from 70,000 to 2.6 million electric and hybrid vehicles on the road by 2020. Once a vehicle battery has degraded to around 70-80% of its capacity it is considered to be at the end of its first life application. This leaves capacity available for a second life at a much cheaper cost than a new BESS. Assuming a battery capacity of around 5-18 kWh (MHEV 5 kWh to BEV 18 kWh) and an approximately 10-year life span, this equates to a projected second-life battery storage capability of more than 1 GWh by 2025. Moreover, each vehicle manufacturer has different specifications for battery chemistry, number and arrangement of battery cells, capacity, voltage, size, etc. To enable research and investment in this area and to maximise the remaining life of these batteries, one of the design challenges is to combine these hybrid batteries into a grid-tie converter in which their different performance characteristics and parameter variation can be catered for, and in which a hot swapping mechanism is available so that as a battery ends its second life it can be replaced without affecting the overall system operation.
This integration of either single types of batteries with vastly different performance capability or a hybrid battery system into a grid-tie energy storage system differs from existing work on battery energy storage systems (BESS), which deals with a single type of battery with common characteristics. This thesis addresses and solves, for the first time, the power electronic design challenges in integrating second life hybrid batteries into a grid-tie energy storage unit. The study details a suitable multi-modular power electronic converter and its various switching strategies, which can integrate widely different batteries to a grid-tie inverter irrespective of their characteristics, voltage levels and reliability. The proposed converter provides high efficiency and enhanced control flexibility, and is capable of operating in different operational modes from input to output. Designing an appropriate control system for this kind of hybrid battery storage system is also important because of the variation in battery types, differences in characteristics and different levels of degradation. The thesis proposes a generalised distributed power sharing strategy, based on weighting functions, that aims to make optimal use of a set of hybrid batteries according to their relative characteristics while providing the necessary grid support by distributing power between the batteries. The strategy is adaptive in nature and varies as the individual battery characteristics change in real time, for example as a result of degradation. A suitable bidirectional distributed control strategy, or module-independent control technique, has been developed for each mode of operation of the proposed modular converter. Stability is an important consideration in the control of all power converters, and as such this thesis investigates the control stability of the multi-modular converter in detail. Many controllers use PI/PID-based techniques with fixed control parameters; however, this was not found to be suitable from a stability point of view. Issues of control stability with this controller type under one of the operating modes led to the development of an alternative adaptive and nonlinear Lyapunov-based control for the modular power converter. Finally, detailed simulation and experimental validation of the proposed power converter operation, power sharing strategy, proposed control structures and control stability have been undertaken using a grid-connected, laboratory-based, multi-modular hybrid battery energy storage system prototype. The experimental validation has demonstrated the feasibility of this new energy storage system for use in future grid applications.
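To make the weighting-function idea concrete, the sketch below splits a grid power demand among dissimilar battery modules in proportion to weights built from their capacity, health and state of charge. The particular weighting rule and module data are illustrative assumptions, not the strategy developed in the thesis.

```python
# Proportional power sharing among second-life battery modules via simple weights.

def weights(modules, discharging=True):
    """Weight each module by usable capacity; favour high SoC when discharging
    and low SoC when charging, so weaker or fuller modules are not over-stressed."""
    raw = {}
    for name, m in modules.items():
        soc_term = m["soc"] if discharging else (1.0 - m["soc"])
        raw[name] = m["capacity_kwh"] * m["health"] * soc_term
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

def share_power(demand_kw, modules):
    """Split a grid power demand (positive = discharge) between the modules."""
    w = weights(modules, discharging=demand_kw >= 0)
    return {name: demand_kw * w[name] for name in modules}

# Hypothetical hybrid battery set with different capacities and degradation levels.
modules = {
    "ex-EV pack A":  {"capacity_kwh": 18.0, "health": 0.75, "soc": 0.80},
    "ex-HEV pack B": {"capacity_kwh": 5.0,  "health": 0.70, "soc": 0.60},
    "ex-EV pack C":  {"capacity_kwh": 16.0, "health": 0.65, "soc": 0.40},
}
print(share_power(10.0, modules))   # kW contribution requested from each module
```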
Abstract:
Society depends on complex IT systems created by integrating and orchestrating independently managed systems. The enormous increase in their scale and complexity over the past decade means new software-engineering techniques are needed to help us cope with their inherent complexity. The key characteristic of these systems is that they are assembled from other systems that are independently controlled and managed. While there is increasing awareness in the software engineering community of related issues, the most relevant background work comes from systems engineering. The interacting algorithms that led to the Flash Crash represent an example of a coalition of systems, serving the purposes of their owners and cooperating only because they have to. The owners of the individual systems were competing finance companies that were often mutually hostile. Each system jealously guarded its own information and could change without consulting any other system.
Abstract:
One of the reasons for using variability in the software product line (SPL) approach (see Apel et al., 2006; Figueiredo et al., 2008; Kastner et al., 2007; Mezini & Ostermann, 2004) is to delay a design decision (Svahnberg et al., 2005). Instead of deciding in advance what system to develop, with the SPL approach a set of components and a reference architecture are specified and implemented (during domain engineering, see Czarnecki & Eisenecker, 2000), out of which individual systems are composed at a later stage (during application engineering, see Czarnecki & Eisenecker, 2000). By postponing design decisions in such a manner, it is possible to better fit the resultant system to its intended environment, for instance, to allow the system interaction mode to be selected after the customers have purchased particular hardware, such as a PDA vs. a laptop. Such variability is expressed through variation points, which are locations in a software-based system where choices are available for defining a specific instance of a system (Svahnberg et al., 2005). Until recently it had sufficed to postpone committing to a specific system instance until just before system runtime. However, in recent years the use of, and expectations placed on, software systems in human society have undergone significant changes. Today's software systems need to be always available, highly interactive, and able to adapt continuously to varying environmental conditions, user characteristics and the characteristics of other systems that interact with them. Such systems, called adaptive systems, are expected to be long-lived and able to undertake adaptations with little or no human intervention (Cheng et al., 2009). Therefore, variability now needs to be present at system runtime as well, which leads to the emergence of a new type of system: adaptive systems with dynamic variability.
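A minimal sketch of a variation point bound at runtime is given below: the interaction mode is left open as a declared point of variation, and the concrete variant is chosen from the execution environment when the application starts. The variant names and selection rule are hypothetical, used only to illustrate late binding of a variation point.

```python
# Resolve a variation point (the interaction mode) at runtime rather than design time.
import os
from typing import Protocol

class InteractionMode(Protocol):          # the variation point's contract
    def render(self, message: str) -> str: ...

class TouchUI:
    def render(self, message: str) -> str:
        return f"[touch buttons] {message}"

class DesktopUI:
    def render(self, message: str) -> str:
        return f"[menu + keyboard shortcuts] {message}"

VARIANTS: dict[str, type] = {"pda": TouchUI, "laptop": DesktopUI}

def bind_interaction_mode() -> InteractionMode:
    """Pick the variant from the environment when the application starts."""
    device = os.environ.get("TARGET_DEVICE", "laptop")
    return VARIANTS.get(device, DesktopUI)()

ui = bind_interaction_mode()
print(ui.render("Order confirmed"))
```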
Abstract:
Two classes of software that are notoriously difficult to develop on their own are rapidly merging into one. This will affect every key service that we rely upon in modern society, yet a successful merge is unlikely to be achievable using software development techniques specific to either class. This paper explains the growing demand for software capable of both self-adaptation and high integrity, and advocates the use of a collection of "@runtime" techniques for its development, operation and management. We summarise early research into the development of such techniques, and discuss the remaining work required to overcome the great challenge of self-adaptive high-integrity software. © 2011 ACM.
Abstract:
There is a growing awareness that inflammatory diseases have an oxidative pathology, which can result in specific oxidation of amino acids within proteins. Antibody-based techniques for detecting oxidative posttranslational modifications (oxPTMs) are often used to identify the level of protein oxidation. There are many commercially available antibodies, but some uncertainty about the potential level of cross-reactivity they exhibit; moreover, little information regarding the specific target epitopes is available. The aim of this work was to investigate the potential of antibodies to distinguish between selected peptides with and without oxPTMs. Two peptides, one containing chlorotyrosine (DY-Cl-EDQQKQLC) and the other an unmodified tyrosine (DYEDQQKQLC), were synthesized and complementary anti-sera were produced in sheep using standard procedures. The anti-sera were tested using a half-sandwich ELISA, and the anti-serum raised against the chlorotyrosine-containing peptide showed increased binding to the chlorinated peptide, whereas the control anti-serum bound similarly to both peptides. This suggested that antibodies can discriminate between similar peptide sequences with and without an oxidative modification. A peptide (STSYGTGC) and its variants with chlorotyrosine or nitrotyrosine were also produced. The anti-sera showed substantially less binding to these alternative peptides than to the original peptides against which they were raised. Work is ongoing to test commercially available antibodies against the synthetic peptides as a comparison to the anti-sera produced in sheep. In conclusion, the anti-sera were able to distinguish between oxidatively modified and unmodified peptides, and between two different sequences around the modification site.
Abstract:
The topic of this thesis is the development of knowledge-based statistical software. The shortcomings of conventional statistical packages are discussed to illustrate the need to develop software which is able to exhibit a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Some of the issues involved in the development of knowledge-based software are presented and a review is given of some of the systems that have been developed so far. The majority of these have moved away from conventional architectures by adopting what can be termed an expert systems approach. The thesis then proposes an approach based upon the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check whether the use of the chosen technique is semantically sound, i.e. whether the results obtained will be meaningful. Current systems, in contrast, can only perform what can be considered syntactic checks. The prototype system implemented to explore the feasibility of such an approach is presented; it has been designed as an enhanced variant of a conventional-style statistical package. This involved developing a semantic data model to represent some of the statistically relevant knowledge about data, and identifying sets of requirements that should be met for the application of the statistical techniques to be valid. The areas of statistics covered in the prototype are measures of association and tests of location.
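The sketch below illustrates the semantic-checking idea: each variable carries its measurement scale in the data model, and a request to apply a technique is accepted only if the scales make the result meaningful. The techniques, scale requirements and variables shown are simplified assumptions rather than the prototype's actual rules.

```python
# Accept a statistical technique only when the variables' measurement scales permit it.

SCALES = ("nominal", "ordinal", "interval", "ratio")

# Minimum scale each technique needs for its input variables (simplified).
REQUIREMENTS = {
    "pearson_correlation": "interval",
    "spearman_correlation": "ordinal",
    "chi_square_association": "nominal",
}

def semantically_valid(technique, variables, metadata):
    """Check that every variable meets the technique's minimum measurement scale."""
    needed = SCALES.index(REQUIREMENTS[technique])
    return all(SCALES.index(metadata[v]["scale"]) >= needed for v in variables)

# Hypothetical semantic model of a small data set.
metadata = {
    "blood_pressure": {"scale": "ratio"},
    "pain_rating":    {"scale": "ordinal"},
    "eye_colour":     {"scale": "nominal"},
}

print(semantically_valid("pearson_correlation", ["blood_pressure", "pain_rating"], metadata))   # False
print(semantically_valid("spearman_correlation", ["pain_rating", "blood_pressure"], metadata))  # True
```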
Abstract:
A major application of computers has been to control physical processes, in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process systems community have approached the problems of modelling and analysing such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at the early stages of their development, with a particular bias towards application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC 1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of its firing rules is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
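A minimal sketch of the Petri-net firing semantics that underpins such analysis is shown below: a transition is enabled when all of its input places are marked, and firing moves tokens from inputs to outputs. The timing extensions and SFC-specific rules formalised in the thesis are not modelled here, and the two-step net itself is hypothetical.

```python
# Minimal Petri-net semantics: enabling check and token-moving firing rule.

marking = {"step1": 1, "step2": 0, "step3": 0}   # tokens per place (SFC-like steps)

transitions = {
    "t1": {"inputs": ["step1"], "outputs": ["step2"]},
    "t2": {"inputs": ["step2"], "outputs": ["step3"]},
}

def enabled(t, marking):
    """A transition is enabled iff all of its input places hold a token."""
    return all(marking[p] > 0 for p in transitions[t]["inputs"])

def fire(t, marking):
    """Consume a token from each input place and produce one in each output place."""
    if not enabled(t, marking):
        raise ValueError(f"{t} is not enabled")
    for p in transitions[t]["inputs"]:
        marking[p] -= 1
    for p in transitions[t]["outputs"]:
        marking[p] += 1

fire("t1", marking)
print(marking)                  # {'step1': 0, 'step2': 1, 'step3': 0}
print(enabled("t2", marking))   # True
```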
Abstract:
Software development methodologies are becoming increasingly abstract, progressing from low-level assembly and implementation languages such as C and Ada, to component-based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model-driven approaches emphasise the role of higher-level models and notations, and embody a process of automatically deriving lower-level representations and concrete software implementations. The relationship between data and software is also evolving. Modern data formats are becoming increasingly standardised, open and empowered in order to support a growing need to share data in both academia and industry. Many contemporary data formats, most notably those based on XML, are self-describing, able to specify valid data structure and content, and can also describe data manipulations and transformations. Furthermore, while applications of the past have made extensive use of data, the runtime behaviour of future applications may be driven by data, as demonstrated by the field of dynamic data driven application systems. The combination of empowered data formats and high-level software development methodologies forms the basis of modern game development technologies, which drive software capabilities and runtime behaviour using empowered data formats describing game content. While low-level libraries provide optimised runtime execution, content data is used to drive a wide variety of interactive and immersive experiences. This thesis describes the Fluid project, which combines component-based software development and game development technologies in order to define novel component technologies for the description of data-driven component-based applications. The thesis makes explicit contributions to the fields of component-based software development and visualisation of spatiotemporal scenes, and also describes potential implications for game development technologies. The thesis also proposes a number of developments in dynamic data driven application systems in order to further empower the role of data in this field.
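As a sketch of the data-driven composition idea, the snippet below reads a small XML description of an application's components and instantiates them from a registry. The element names and component classes are hypothetical and are not the Fluid project's actual schema.

```python
# Assemble an application's components from a self-describing XML content file.
import xml.etree.ElementTree as ET

class Renderer:
    def __init__(self, backend="software"):
        self.backend = backend

class ParticleSystem:
    def __init__(self, max_particles="1000"):
        self.max_particles = int(max_particles)

REGISTRY = {"Renderer": Renderer, "ParticleSystem": ParticleSystem}

SCENE_XML = """
<application>
  <component type="Renderer" backend="opengl"/>
  <component type="ParticleSystem" max_particles="5000"/>
</application>
"""

def assemble(xml_text):
    """Build the application's components from the content data."""
    components = []
    for node in ET.fromstring(xml_text).findall("component"):
        attrs = dict(node.attrib)
        cls = REGISTRY[attrs.pop("type")]
        components.append(cls(**attrs))
    return components

for comp in assemble(SCENE_XML):
    print(type(comp).__name__, vars(comp))
```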
Abstract:
Many planning and control tools, especially network analysis, have been developed over the last four decades. The majority of them were created in military organisations to solve the problem of planning and controlling research and development projects. The original version of the network model (i.e. CPM/PERT) was transplanted to the construction industry without consideration of the special nature and environment of construction projects. It suited the purpose of setting targets and defining objectives, but it failed to satisfy the requirements of detailed planning and control at the site level. Several analytical and heuristic rule-based methods were designed and combined with the structure of CPM to eliminate its deficiencies. None of them provides a complete solution to the problem of resource, time and cost control. VERT was designed to deal with new ventures; it is suitable for project evaluation at the development stage. CYCLONE, on the other hand, is concerned with the design and micro-analysis of the production process. This work presents an extensive critical review of the available planning techniques and addresses the problem of planning for site operation and control. Based on an outline of the nature of site control, this research developed a simulation-based network model which combines part of the logic of both VERT and CYCLONE. Several new nodes were designed to model the availability and flow of resources and the overhead and operating cost, together with special nodes for evaluating time and cost. A large software package was written to handle the input, the simulation process and the output of the model. This package is designed to be used on any microcomputer running the MS-DOS operating system. Data from real-life projects were used to demonstrate the capability of the technique. Finally, a set of conclusions is drawn regarding the features and limitations of the proposed model, and recommendations for future work are outlined at the end of this thesis.
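To illustrate the simulation idea behind such a network model, the sketch below samples activity durations from triangular estimates and repeats a forward pass over a small precedence network to estimate the completion-time distribution. The network, duration estimates and overrun threshold are hypothetical, and the model's resource- and cost-flow nodes are not reproduced here.

```python
# Monte Carlo forward pass over a small activity network with triangular durations.
import random
from statistics import mean

# activity -> (predecessors, (optimistic, most likely, pessimistic) duration in days)
NETWORK = {
    "excavate":   ([],                    (4, 5, 8)),
    "foundation": (["excavate"],          (6, 8, 12)),
    "frame":      (["foundation"],        (10, 12, 18)),
    "services":   (["foundation"],        (7, 9, 14)),
    "finish":     (["frame", "services"], (5, 6, 10)),
}

def simulate_once():
    """One forward pass: earliest finish of each activity with sampled durations."""
    finish = {}
    for activity, (preds, (low, mode, high)) in NETWORK.items():  # definition order respects precedence
        start = max((finish[p] for p in preds), default=0.0)
        finish[activity] = start + random.triangular(low, high, mode)
    return max(finish.values())

runs = [simulate_once() for _ in range(5000)]
print(f"mean completion: {mean(runs):.1f} days")
print(f"P(overrun 45 days): {sum(r > 45 for r in runs) / len(runs):.2%}")
```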
Abstract:
The work described was carried out as part of a collaborative Alvey software engineering project (project number SE057). The project collaborators were the Inter-Disciplinary Higher Degrees Scheme of the University of Aston in Birmingham, BIS Applied Systems Ltd. (BIS) and the British Steel Corporation. The aim of the project was to investigate the potential application of knowledge-based systems (KBSs) to the design of commercial data processing (DP) systems. The work was primarily concerned with BIS's Structured Systems Design (SSD) methodology for DP systems development and how users of this methodology could be supported using KBS tools. The problems encountered by users of SSD are discussed and potential forms of computer-based support for inexpert designers are identified. An architecture for a support environment for SSD, the Intellipse system, is proposed, based on the integration of KBS and non-KBS tools for individual design tasks within SSD. The Intellipse system has two modes of operation: Advisor and Designer. The design, implementation and user evaluation of Advisor are discussed. The results of a Designer feasibility study, the aim of which was to analyse major design tasks in SSD to assess their suitability for KBS support, are reported. The potential role of KBS tools in the domain of database design is discussed. The project involved extensive knowledge engineering sessions with expert DP systems designers, and some practical lessons in relation to KBS development are derived from this experience. The nature of the expertise possessed by expert designers is discussed. The need for operational KBSs to be built to the same standards as other commercial and industrial software is identified. A comparison between current KBS and conventional DP systems development is made. On the basis of this analysis, a structured development method for KBSs is proposed: the POLITE model. Some initial results of applying this method to KBS development are discussed. Several areas for further research and development are identified.
Abstract:
We investigate knowledge exchange among commercial organizations, the rationale behind it, and its effects on the market. Knowledge exchange is known to be beneficial for industry, but in order to explain it, authors have used high-level concepts like network effects, reputation, and trust. We attempt to formalize a plausible and elegant explanation of how and why companies adopt information exchange and why it benefits the market as a whole when this happens. This explanation is based on a multiagent model that simulates a market of software providers. Even though the model does not include any high-level concepts, information exchange naturally emerges during simulations as a successful profitable behavior. The conclusions reached by this agent-based analysis are twofold: 1) a straightforward set of assumptions is enough to give rise to exchange in a software market, and 2) knowledge exchange is shown to increase the efficiency of the market.
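A toy sketch in the spirit of such an agent-based model is given below: software providers hold a knowledge level, optionally share it with a random peer each round, and earn profit that grows with knowledge, so comparing total profit with and without sharing mirrors the efficiency claim. All parameters and update rules are illustrative assumptions, not the paper's model.

```python
# Toy agent-based market: does peer-to-peer knowledge exchange raise total profit?
import random

def run_market(n_agents=50, rounds=100, exchange=True, seed=1):
    rng = random.Random(seed)
    knowledge = [rng.uniform(0.0, 1.0) for _ in range(n_agents)]
    total_profit = 0.0
    for _ in range(rounds):
        if exchange:
            # Each agent meets a random peer and both move toward the better
            # of their two knowledge levels (knowledge is non-rival).
            for i in range(n_agents):
                j = rng.randrange(n_agents)
                best = max(knowledge[i], knowledge[j])
                knowledge[i] += 0.1 * (best - knowledge[i])
                knowledge[j] += 0.1 * (best - knowledge[j])
        # Profit earned in each round grows with an agent's knowledge.
        total_profit += sum(knowledge)
    return total_profit

print("market profit without exchange:", round(run_market(exchange=False)))
print("market profit with exchange:   ", round(run_market(exchange=True)))
```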