973 results for Minimal Systems
Abstract:
This paper shows the importance of a holistic understanding of the Earth as a living planet, which humankind inhabits and where it is exposed to environmental hazards of different kinds. The aim of the paper summarized here is a reflection on all these concepts and scientific considerations related to the important role of humankind in the handling of natural hazards. Our planet is an unstable, dynamical system highly sensitive to initial conditions, as proposed by chaos theory (González-Miranda 2004); it is a complex organic whole that responds to minimal variations, which can affect several natural phenomena such as plate tectonics, solar flares, fluid turbulence, landscape formation, forest fires, the growth and migration of populations, and biological evolution. This is known as the "butterfly effect" (Lorenz 1972), meaning that a small change in the system causes a chain of events leading to large-scale, unpredictable consequences. The aim of this work is to dwell on the importance of understanding these natural and catastrophic geological, biological and human systems, which are so sensitive to equilibrium conditions, in order to prevent, avoid and mend their effects, and to face them in a resilient way.
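Sensitive dependence on initial conditions can be made concrete with a short numerical sketch. The following toy simulation of the Lorenz system is not part of the paper; the parameters are the classical sigma = 10, rho = 28, beta = 8/3, and the integrator is a simple explicit Euler scheme chosen for brevity. A one-in-a-million perturbation of the initial state grows into a macroscopic divergence:

```python
# Illustrative sketch (not from the paper): the "butterfly effect" in the
# Lorenz system -- two trajectories starting a millionth apart end up
# macroscopically different.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def max_divergence(s1, s2, steps=4000):
    """Largest separation in x observed while stepping both trajectories."""
    worst = 0.0
    for _ in range(steps):
        s1, s2 = lorenz_step(s1), lorenz_step(s2)
        worst = max(worst, abs(s1[0] - s2[0]))
    return worst

# perturb the third coordinate by one part in a million
worst = max_divergence((1.0, 1.0, 1.0), (1.0, 1.0, 1.000001))
```

The separation grows roughly exponentially until it saturates at the size of the attractor, which is exactly the unpredictability the abstract describes.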
Abstract:
Due to the large increase in digital data in recent years, a new parallel computing paradigm for processing big data efficiently has arisen. Many of the systems based on this paradigm, also called data-intensive computing systems, follow the Google MapReduce programming model. The main advantage of MapReduce systems is that they are based on the idea of sending the computation to where the data resides, aiming to provide scalability and efficiency. In failure-free scenarios, these frameworks usually achieve good results. However, most of the scenarios in which they are used are characterized by the presence of failures, so these platforms usually incorporate fault-tolerance and reliability techniques as built-in features. On the other hand, dependability improvements are known to come with additional resource costs. This is reasonable, and the providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same tradeoff between fault-tolerance capabilities (or, more generally, reliability capabilities) and cost. This thesis addresses the coexistence of reliability and resource efficiency in MapReduce-based systems through methodologies that introduce minimal cost while guaranteeing an appropriate level of reliability. To achieve this, we have proposed: (i) a formalization of a failure detector abstraction; (ii) an alternative solution to the single points of failure of these platforms; and finally (iii) a novel feedback-based resource allocation system at the container level. 
These generic contributions have been evaluated on the Hadoop YARN architecture, which is nowadays the reference platform in the data-intensive computing systems community. The thesis demonstrates how all of its contributions outperform Hadoop YARN in terms of both reliability and resource efficiency.
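The failure detector abstraction in contribution (i) can be illustrated with a minimal heartbeat-based sketch. This is a hypothetical illustration of the general idea only, not the thesis's actual formalization; the class and method names, and the fixed-timeout suspicion policy, are invented for the example:

```python
# Hypothetical heartbeat-style failure detector: a node that has been silent
# longer than the timeout becomes suspected. Names and the timeout policy
# are illustrative assumptions, not the thesis's design.
class HeartbeatFailureDetector:
    def __init__(self, timeout):
        self.timeout = timeout    # seconds of silence before suspicion
        self.last_seen = {}       # node id -> time of last heartbeat

    def heartbeat(self, node, now):
        """Record that `node` was alive at time `now`."""
        self.last_seen[node] = now

    def suspected(self, now):
        """Return the nodes whose last heartbeat is older than the timeout."""
        return {n for n, t in self.last_seen.items() if now - t > self.timeout}

fd = HeartbeatFailureDetector(timeout=5.0)
fd.heartbeat("worker-1", now=0.0)
fd.heartbeat("worker-2", now=3.0)
suspects = fd.suspected(now=7.0)  # worker-1 has been silent for 7 s > 5 s
```

In a MapReduce-style master, such a detector would drive re-execution of tasks on suspected workers; the tradeoff between detection speed and false suspicion is precisely where reliability meets resource cost.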
Abstract:
Protein phosphoaspartate bonds play a variety of roles. In response regulator proteins of two-component signal transduction systems, phosphorylation of an aspartate residue is coupled to a change from an inactive to an active conformation. In phosphatases and mutases of the haloacid dehalogenase (HAD) superfamily, phosphoaspartate serves as an intermediate in phosphotransfer reactions, and in P-type ATPases, also members of the HAD family, it serves in the conversion of chemical energy to ion gradients. In each case, lability of the phosphoaspartate linkage has hampered a detailed study of the phosphorylated form. For response regulators, this difficulty was recently overcome with a phosphate analog, BeF₃⁻, which yields persistent complexes with the active site aspartate of their receiver domains. We now extend the application of this analog to a HAD superfamily member by solving at 1.5-Å resolution the x-ray crystal structure of the complex of BeF₃⁻ with phosphoserine phosphatase (PSP) from Methanococcus jannaschii. 
The structure is comparable to that of a phosphoenzyme intermediate: BeF₃⁻ is bound to Asp-11 with the tetrahedral geometry of a phosphoryl group, is coordinated to Mg²⁺, and is bound to residues surrounding the active site that are conserved in the HAD superfamily. Comparison of the active sites of BeF₃⁻·PSP and BeF₃⁻·CheY, a receiver domain/response regulator, reveals striking similarities that provide insights into the function not only of PSP but also of P-type ATPases. Our results indicate that the use of BeF₃⁻ for structural studies of proteins that form phosphoaspartate linkages will extend well beyond response regulators.
Abstract:
This paper presents a multilayered architecture that enhances the capabilities of current QA systems and allows different types of complex questions or queries to be processed. The answers to these questions need to be gathered from factual information scattered throughout different documents. Specifically, we designed a specialized layer to process the different types of temporal questions. Complex temporal questions are first decomposed into simple questions, according to the temporal relations expressed in the original question. In the same way, the answers to the resulting simple questions are recomposed, fulfilling the temporal restrictions of the original complex question. A novel aspect of this approach resides in the decomposition, which uses a minimal quantity of resources, with the final aim of obtaining a portable platform that is easily extensible to other languages. In this paper we also present a methodology for evaluating the decomposition of the questions, as well as the ability of the implemented temporal layer to perform at a multilingual level. The temporal layer was first implemented for English, then evaluated and compared with: a) a general-purpose QA system (F-measure 65.47% for QA plus the English temporal layer vs. 38.01% for the general QA system), and b) a well-known QA system. Much better results were obtained for temporal questions with the multilayered system. The system was therefore extended to Spanish, and very good results were again obtained in the evaluation (F-measure 40.36% for QA plus the Spanish temporal layer vs. 22.94% for the general QA system).
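The decomposition/recomposition idea can be sketched minimally. The signal-word list and the naive splitting rule below are illustrative assumptions for the sake of the example, not the paper's actual grammar of temporal relations:

```python
import re

# Toy sketch of temporal-question decomposition: split a complex question on
# a temporal signal word, keeping the relation so that the answers to the
# simple questions can later be recomposed under that temporal restriction.
SIGNALS = ["after", "before", "while", "during"]

def decompose(question):
    """Split a complex temporal question into simple ones plus a relation."""
    for signal in SIGNALS:
        parts = re.split(rf"\b{signal}\b", question, maxsplit=1)
        if len(parts) == 2:
            q1 = parts[0].strip(" ?")
            q2 = parts[1].strip(" ?")
            return {"relation": signal,
                    "simple_questions": [q1 + "?", q2 + "?"]}
    return {"relation": None, "simple_questions": [question]}

result = decompose("Where did Bill Clinton study before he went to Oxford?")
```

A recomposition step would then answer each simple question independently and keep only answer pairs whose timestamps satisfy the stored relation ("before" here), which is the restriction-fulfilling step the abstract describes.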
Abstract:
In this article, a new methodology is presented to obtain representation models for an a priori relation z = u(x1, x2, ..., xn) (1), given a known experimental dataset {zi; x1i, x2i, ..., xni}, i = 1, 2, ..., p. In this methodology, a potential energy is initially defined over each possible model for the relationship (1), which allows the application of Lagrangian mechanics to the derived system. Solving the Euler–Lagrange equations of this system yields the optimal solution according to the minimal-action principle. The defined Lagrangian corresponds to a continuous medium, over which an n-dimensional finite element model has been applied, so it is possible to obtain a solution to the problem by solving a compatible, determined, symmetric linear system of equations. The computational implementation of the methodology has resulted in an improvement in the process of obtaining the representation models previously derived and published by the authors.
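In generic terms, the minimal-action principle invoked above takes the following standard form (the paper's specific Lagrangian is not given in the abstract, so this is the textbook statement only): the model u minimizes an action functional, and stationarity yields the Euler–Lagrange equations that the finite element discretization turns into a symmetric linear system.

```latex
S[u] = \int_{\Omega} L\bigl(u, \nabla u, \mathbf{x}\bigr)\, d\mathbf{x},
\qquad
\frac{\partial L}{\partial u}
- \sum_{k=1}^{n} \frac{\partial}{\partial x_k}
  \frac{\partial L}{\partial\,(\partial u / \partial x_k)} = 0 .
```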
Abstract:
For leased equipment, the lessor carries out the maintenance of the equipment. Usually, the lease contract specifies the penalty for equipment failures and for repairs not being carried out within specified time limits. This implies that optimal preventive maintenance policies must take these penalty costs into account and trade them off properly against the cost of preventive maintenance actions. The costs associated with failures are high, as unplanned corrective maintenance actions are costly, as are the penalties incurred when lease contract terms are violated. The paper develops a model to determine the optimal parameters of a preventive maintenance policy that takes all these costs into account to minimize the total expected cost to the lessor for a new item lease. The parameters of the policy are (i) the number of preventive maintenance actions to be carried out over the lease period, (ii) the time instants for such actions, and (iii) the level of each action. (c) 2005 Elsevier B.V. All rights reserved.
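The tradeoff the model captures can be illustrated with a toy calculation: more preventive maintenance (PM) actions raise the PM cost but lower the expected failure penalties, so the expected total cost has an interior minimum. All numbers and the power-law failure-intensity form below are assumptions for illustration, not the paper's model:

```python
# Toy cost tradeoff (parameters assumed, not the paper's model): n equally
# spaced PM actions over lease period T, with a power-law cumulative failure
# intensity that re-accumulates from zero after each PM action.
def expected_cost(n_pm, T=10.0, beta=2.0, scale=0.5,
                  cost_failure=100.0, cost_pm=200.0):
    """Expected total cost to the lessor: failure penalties plus PM cost."""
    interval = T / (n_pm + 1)
    expected_failures = (n_pm + 1) * scale * interval ** beta
    return expected_failures * cost_failure + n_pm * cost_pm

costs = {n: expected_cost(n) for n in range(11)}
best_n = min(costs, key=costs.get)  # interior minimum: some PM pays off, endless PM does not
```

With these assumed numbers the optimum is a handful of PM actions: too few and failure penalties dominate, too many and the PM cost itself does.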
Abstract:
This paper proposes an architecture for pervasive computing which utilizes context information to provide adaptations based on vertical handovers (handovers between heterogeneous networks) while supporting application Quality of Service (QoS). The future of mobile computing will see an increase in ubiquitous network connectivity which allows users to roam freely between heterogeneous networks. One of the requirements for pervasive computing is to adapt computing applications or their environment if current applications can no longer be provided with the requested QoS. One possible adaptation is a vertical handover to a different network. Vertical handover operations include changing network interfaces on a single device or changing between different devices. Such handovers should be performed with minimal user distraction and minimal violation of communication QoS for user applications. The solution utilizes context information regarding user devices, user location, application requirements, and the network environment. The paper shows how vertical handover adaptations are incorporated into the whole infrastructure of a pervasive system.
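A context-driven vertical handover decision of the kind described can be sketched as a QoS-constrained selection among candidate networks. The attribute names, example networks, and load-based tie-break below are illustrative assumptions, not the paper's actual architecture:

```python
# Hypothetical handover decision: keep only networks meeting the
# application's QoS requirements, then prefer the least-loaded candidate.
def select_network(networks, required_bandwidth_mbps, max_latency_ms):
    """Return the best candidate network satisfying the QoS constraints, or None."""
    candidates = [n for n in networks
                  if n["bandwidth_mbps"] >= required_bandwidth_mbps
                  and n["latency_ms"] <= max_latency_ms]
    # least-loaded candidate minimizes the chance of QoS violation after handover
    return min(candidates, key=lambda n: n["load"]) if candidates else None

networks = [
    {"name": "wlan-campus", "bandwidth_mbps": 54.0, "latency_ms": 20.0,  "load": 0.7},
    {"name": "umts-cell",   "bandwidth_mbps": 2.0,  "latency_ms": 120.0, "load": 0.3},
    {"name": "wlan-cafe",   "bandwidth_mbps": 11.0, "latency_ms": 35.0,  "load": 0.2},
]
target = select_network(networks, required_bandwidth_mbps=5.0, max_latency_ms=50.0)
```

In a full system the same context inputs (device capabilities, location, application requirements) would feed this decision continuously, triggering a handover only when the current network drops out of the candidate set.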
Abstract:
This thesis addresses the creation of fibre Bragg grating based sensors and the fabrication systems used to manufacture them. The information is presented primarily as experimental evidence, backed up by current theoretical concepts. The issues involved in fabricating high quality fibre Bragg gratings are systematically investigated. Sources of error in the manufacturing processes are detected, analysed and reduced to allow higher quality gratings to be fabricated. The use of chirped Moiré gratings as distributed sensors is explored; the spatial resolution is increased beyond that of any previous work, and the use of the gratings as distributed load sensors is also presented. Chirped fibre Bragg gratings are shown to be capable of operating as in-situ wear sensors, able to accurately measure the wear or erosion of the surface of a material. Two methods of measuring the wear are compared, giving a comparison between an expensive high-resolution method and a cheap lower-resolution method. The wear sensor is also shown to be capable of measuring the physical size and location of damage induced on the surface of a material. An array method is demonstrated to provide high survivability, such that the array may be damaged yet operate with minimal degradation in performance.
Abstract:
Liposome systems are well reported for their activity as vaccine adjuvants; however, novel lipid-based microbubbles have also been reported to enhance the targeting of antigens into dendritic cells (DCs) in cancer immunotherapy (Suzuki et al 2009). This research initially focused on the formulation of gas-filled lipid-coated microbubbles and their potential activation of macrophages using in vitro models. Further studies in the thesis concentrated on aqueous-filled liposomes as vaccine delivery systems. Initial work involved formulating four different preparations of lipid-coated microbubbles (sometimes referred to as gas-filled liposomes), produced by homogenisation, sonication, a gas-releasing chemical reaction, and agitation/pressurisation, and characterising them in terms of stability and physico-chemical characteristics. Two of the preparations were tested as pressure probes in MRI studies. The first preparation was composed of a standard phospholipid (DSPC) filled with air or nitrogen (N2), whilst in the second the microbubbles were composed of a fluorinated phospholipid (F-GPC) filled with a fluorocarbon-saturated gas. The studies showed that, whilst maintaining high sensitivity, a novel contrast agent allowing stable MRI measurements of fluid pressure over time could be produced using lipid-coated microbubbles. The F-GPC microbubbles were found to withstand pressures up to 2.6 bar with minimal damage, whereas the DSPC microbubbles were damaged above 1.3 bar. However, it was also found that the N2-filled DSPC microbubbles were extremely robust to pressure, with performance similar to that of the F-GPC based microbubbles. Following on from the MRI studies, the air- and N2-filled DSPC lipid-based microbubbles were assessed for their potential activation of macrophages using in vitro models and compared to equivalent aqueous-filled liposomes. 
The microbubble formulations did not stimulate macrophage uptake, so studies thereafter focused on aqueous-filled liposomes. Further studies concentrated on formulating and characterising, both physico-chemically and immunologically, cationic liposomes based on the potent adjuvant dimethyldioctadecylammonium (DDA) and the immunomodulator trehalose dibehenate (TDB), with the addition of polyethylene glycol (PEG). One proposed hypothesis for the mechanism behind the immunostimulatory effect obtained with DDA:TDB is the 'depot effect', in which the liposomal carrier helps to retain the antigen at the injection site, thereby increasing the time of vaccine exposure to the immune cells. The depot effect has been suggested to be primarily due to the liposomes' cationic nature. Results reported within this thesis demonstrate that higher levels of PEG (i.e. 25%) significantly inhibited the formation of a liposome depot at the injection site and also severely limited the retention of antigen at the site, resulting in faster drainage of the liposomes from the site of injection. The versatility of cationic liposomes based on DDA:TDB in combination with different immunostimulatory ligands, including polyinosinic-polycytidylic acid (poly(I:C), a TLR 3 ligand) and CpG (a TLR 9 ligand), either entrapped within the vesicles or adsorbed onto the liposome surface, was investigated in terms of immunogenic capacity as vaccine adjuvants. Small unilamellar DDA:TDB vesicles (SUVs; 20-100 nm native size) with protein antigen adsorbed to the vesicle surface were the most potent in inducing both T cell (7-fold increase) and antibody (up to 2 log increase) antigen-specific responses. The addition of the TLR agonists poly(I:C) and CpG to SUV liposomes had little or no effect on their adjuvanticity. 
Finally, threitol ceramide (ThrCer), a new immunostimulatory agent, was incorporated into the bilayers of liposomes composed of DDA or DSPC to investigate the uptake of ThrCer by dendritic cells (DCs) and its presentation on CD1d molecules to invariant natural killer T cells. These systems were prepared both as multilamellar vesicles (MLVs) and as small unilamellar vesicles (SUVs). IFN-γ secretion was higher for the DDA SUV liposome formulation (p<0.05), suggesting that ThrCer encapsulation in this liposome formulation resulted in a higher uptake by DCs.
Abstract:
Methodologies for understanding business processes and their information systems (IS) are often criticized, either for being too imprecise and philosophical (a criticism often levied at softer methodologies) or too hierarchical and mechanistic (levied at harder methodologies). The process-oriented holonic modelling methodology combines aspects of softer and harder approaches to aid modellers in designing business processes and associated IS. The methodology uses holistic thinking and a construct known as the holon to build process descriptions into a set of models known as a holarchy. This paper describes the methodology through an action research case study based in a large design and manufacturing organization. The scientific contribution is a methodology for analysing business processes in environments that are characterized by high complexity, low volume and high variety where there are minimal repeated learning opportunities, such as large IS development projects. The practical deliverables from the project gave IS and business process improvements for the case study company.
Abstract:
AMS subject classification: 49N35, 49N55, 65Lxx.
Abstract:
There are situations in which it is very important to quickly and positively identify an individual. Examples include suspects detained in the neighborhood of a bombing or terrorist incident, individuals detained attempting to enter or leave the country, and victims of mass disasters. Systems utilized for these purposes must be fast, portable, and easy to maintain. The goal of this project was to develop an ultra-fast, direct PCR method for forensic genotyping of oral swabs. The procedure developed eliminates the need for cellular digestion and extraction of the sample by performing those steps in the PCR tube itself. Then, special high-speed polymerases are added, which are capable of amplifying a newly developed 7-locus multiplex in under 16 minutes. Following amplification, a postage-stamp-sized microfluidic device equipped with a specially designed entangled-polymer separation matrix yields a complete genotype in 80 seconds. The entire process is rapid and reliable, reducing the time from sample to genotype from 1-2 days to under 20 minutes. Operation requires minimal equipment and can easily be performed with a small high-speed thermal cycler, reagents, and a microfluidic device with a laptop. The system was optimized and validated using a number of test parameters and a small test population. The overall precision was better than 0.17 bp, and the power of discrimination was greater than 1 in 10⁶. The small footprint and ease of use will permit this system to be an effective tool to quickly screen and identify individuals detained at ports of entry, police stations and remote locations. The system is robust and portable, and demonstrates to the forensic community a simple solution to the problem of rapid determination of genetic identity.
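The quoted power of discrimination follows from the product rule: assuming independent loci, per-locus random match probabilities multiply. The per-locus value below is assumed purely for illustration, not taken from the study's population data:

```python
# Illustrative arithmetic (per-locus value assumed, not the study's data):
# with independent loci, random match probabilities multiply, which is how a
# 7-locus multiplex can exceed 1-in-a-million power of discrimination.
def random_match_probability(per_locus_probs):
    """Product of per-locus match probabilities (independence assumed)."""
    rmp = 1.0
    for p in per_locus_probs:
        rmp *= p
    return rmp

rmp = random_match_probability([0.1] * 7)  # assume ~0.1 at each of 7 loci
power_of_discrimination = 1.0 / rmp
```

Even a conservative per-locus match probability of 0.1 yields a combined random match probability of 10⁻⁷ over seven loci, consistent with the "greater than 1 in 10⁶" figure reported.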
Abstract:
Acknowledgement We wish to acknowledge A. Pikovsky and M. Zaks for useful discussions. This work has been financially supported by the EU project COSMOS (642563).
Abstract:
Ring opening metathesis polymerization (ROMP) is a variant of olefin metathesis used to polymerize strained cyclic olefins. Ruthenium-based Grubbs' catalysts are widely used in ROMP to produce industrially important products. While highly efficient in organic solvents such as dichloromethane and toluene, these hydrophobic catalysts are not typically applied in aqueous systems. With the advancements in emulsion and miniemulsion polymerization, it is promising to conduct ROMP in an aqueous dispersed phase to generate well-defined latex nanoparticles while improving heat transfer and reducing the use of volatile organic solvents (VOCs). Herein I report the efforts made using a PEGylated ruthenium alkylidene as the catalyst to initiate ROMP in an oil-in-water miniemulsion. ¹H NMR revealed that the synthesized PEGylated catalyst was stable and reactive in water. Using 1,5-cyclooctadiene (COD) as monomer, we showed the highly efficient catalyst yielded colloidally stable polymer latexes with ~100% conversion at room temperature. Kinetic studies demonstrated first-order kinetics with good livingness, as confirmed by the shift of gel permeation chromatography (GPC) traces. Depending on the surfactants used, the particle sizes ranged from 100 to 300 nm with monomodal distributions. The more strained cyclic olefin norbornene (NB) could also be efficiently polymerized with a PEGylated ruthenium alkylidene in miniemulsion to full conversion and with minimal coagulum formation.
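The first-order kinetics reported above correspond to the textbook rate law for chain-growth polymerization with a constant concentration of active species (here k_obs denotes the observed rate constant; this is the standard form, not an equation from the thesis):

```latex
-\frac{d[\mathrm{M}]}{dt} = k_{\mathrm{obs}}\,[\mathrm{M}]
\quad\Longrightarrow\quad
\ln\frac{[\mathrm{M}]_0}{[\mathrm{M}]_t} = k_{\mathrm{obs}}\,t
```

A linear plot of ln([M]₀/[M]ₜ) against time, together with GPC traces shifting to higher molar mass with conversion, is the standard evidence of livingness referred to in the abstract.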