988 results for Coolant loops


Relevance: 10.00%

Abstract:

Low-temperature (LT) magnetic remanence and hysteresis measurements, in the range 300-5 K, were combined with energy dispersive spectroscopy (EDS) in order to characterize the magnetic inventory of strongly diagenetically altered sediments originating from the Niger deep-sea fan. We demonstrate the possibility of distinguishing between different compositions of members of the magnetite-ulvöspinel and ilmenite-hematite solid solution series on a set of five representative samples, two from the upper suboxic and three from the lower sulfidic anoxic zone of gravity core GeoB 4901. Highly sensitive LT magnetic measurements performed on magnetic extracts revealed large differences in magnetic behavior between samples from the different layers. This emphasizes that both Fe-Ti oxide phases occur in different proportions in the two geochemical environments. Most prominent are variations in the coercivity-sensitive parameter, the coercive field (BC). At room temperature (RT), hysteresis loops for all extracts are narrow and yield low coercivities (6-13 mT). With decreasing temperature the loops become more pronounced and wider. At 5 K, an approximately 5-fold increase in BC for the suboxic samples contrasts with a 20-25-fold increase for the samples from the anoxic zone. We demonstrate that this distinct increase in BC at LT corresponds to the increasing proportion of the Ti-rich hemoilmenite phase, while Fe-rich (titano-)magnetite dominates the magnetic signal at RT. This trend is also seen in the room-temperature saturation isothermal remanent magnetization (RT-SIRM) cycles: suboxic samples show remanence curves dominated by Fe-rich mineral phases, while anoxic samples display curves clearly dominated by Ti-rich particles. We show that the EDS intensity ratios of the characteristic Fe Kα and Ti Kα lines of the Fe-Ti oxides may be used to differentiate between members of the magnetite-ulvöspinel and ilmenite-hematite solid solution series.
Furthermore, it is possible to calculate an approximate composition for each grain if the intensity ratios of natural particles are linked to well-known standards. Thus, element spectra with high Fe/Ti intensity ratios were found to be rather typical of titanomagnetite, while low Fe/Ti ratios are indicative of hemoilmenite. The EDS analyses confirm the LT magnetic results: Fe-rich magnetic phases dominate in the upper suboxic environment, whereas Ti-rich magnetic phases comprise the majority of particles in the lower anoxic domain. The mineral assemblage of the upper suboxic environment is composed of magnetite (~19%), titanomagnetite (~62%), hemoilmenite (~17%) and ~2% other particles. In the lower anoxic sediments, reductive diagenetic alteration has resulted in more extensive depletion of the (titano-)magnetite phase, resulting in a relative enrichment of the hemoilmenite phase (~66%). In these strongly anoxic sediments stoichiometric magnetite is barely preserved, and only ~5% titanomagnetite was detected. The remaining ~28% comprises Ti-rich particles such as pseudobrookite or rutile.
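As an illustration of how a composition estimate could be obtained from EDS intensity ratios calibrated against standards, here is a minimal sketch (entirely hypothetical ratio values and a simple linear interpolation; the actual calibration used in the study may differ):

```python
def estimate_composition(r_sample, r_std_a, x_a, r_std_b, x_b):
    """Linearly interpolate the composition parameter x of a grain from its
    Fe/Ti intensity ratio, given two standards with known ratios and x."""
    t = (r_sample - r_std_a) / (r_std_b - r_std_a)
    return x_a + t * (x_b - x_a)

# Hypothetical standards: r = Fe/Ti intensity ratio, x = Ti-content parameter.
x = estimate_composition(r_sample=6.0, r_std_a=10.0, x_a=0.2,
                         r_std_b=2.0, x_b=1.0)   # sample halfway between standards
```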

Relevance: 10.00%

Abstract:

The ROV operations had three objectives: (1) to check whether the "Cherokee" system is suited for advanced benthological work in high-latitude Antarctic shelf areas; (2) to support the disturbance experiment by providing immediate visual information; (3) to continue ecological work started in 1989 at the hilltop situated at the northern margin of the Norsel Bank off the 4-Seasons Inlet (Weddell Sea). The "Cherokee" is equipped with 3 video cameras, 2 of which support the operation. A high-resolution Tritech Typhoon camera is used for the scientific observations to be recorded. In addition, the ROV has a manipulator, a still camera, lights and strobe, a compass, 2 lasers, a Posidonia transponder and an obstacle avoidance sonar. The size of the vehicle is 160 × 90 × 90 cm. In the present configuration, without a TMS (tether management system), the deployment has to start with paying out the full cable length, laying it in loops on deck and connecting the glass fibres at the tether's spool winch. After a final technical check the vehicle is deployed into the water, actively driven perpendicular to the ship's axis, and floats are fixed to the tether. At a cable length of approx. 50 m, the tether is fastened to the depressor by several cable ties and both components are lowered towards the sea floor, the vehicle by the thrusters' propulsion and the depressor by the ship's winch. At 5 m intervals the tether has to be tied to the single conductor cable. In good weather conditions the instruments supporting the navigation of the ROV, especially the Posidonia system, allow the vehicle to follow the ship's course if the ship's speed is slow. Together with the lasers, which act as a scale in the images, they also allow a reproducible scientific analysis, since the transect can be plotted in a GIS. Consequently, the area observed can be easily calculated.
Operation as a predominantly drifting system, especially in areas with near-bottom currents, is also possible; however, the connection of the tether at the rear of the vehicle is unsuitable for such conditions. The recovery of the system mirrors the deployment. Most important is to reach the sea surface at a safe distance perpendicular to the ship's axis in order not to interfere with the ship's propellers. During this phase the Posidonia transponder system is of high relevance, although it has to be switched off at a water depth of approx. 40 m. The minimum personnel needed is 4 persons to handle the tether on deck, one person to operate the ship's winch, one pilot and one additional technician for the ROV operation itself, one scientist, and one person on the ship's bridge in addition to one on deck for whale watching when the Posidonia system is in use. The time for the deployment of the ROV until it reaches the sea floor depends on the water depth and consequently on the length of the cable to be paid out beforehand and tied to the single conductor cable. Deployment and recovery at intermediate water depths can each last up to 2 hours. A reasonable time for benthological observations close to the sea floor is 1 to 3 hours, but this can be extended if scientifically justified. Preliminary results: after a first test station, the ROV was deployed 3 times for observations related to the disturbance experiment. A first attempt to cross the hilltop at the northern margin of the Norsel Bank close to the 4-Seasons Inlet was successful only for the first few hundred metres of transect length. The benthic community was dominated in biomass by the demosponge Cinachyra barbata. Due to the strong current of approx. 1 nautical mile/h, the design of the system, and an expectedly more difficult current regime between grounded icebergs and the top of the hilltop, the operation was stopped before the hilltop was reached.
In a second attempt the hilltop was successfully crossed because the current and wind situation was much more favourable. In contrast to earlier expeditions with the "sprint" ROV, it was the first time that both slopes, the smoother one in the northeast and the steeper one in the southwest, were continuously observed during one cast. A coarse classification of the hilltop fauna shows patches dominated by single taxa: cnidarians, hydrozoans, holothurians, sea urchins and stalked sponges. Approximately 20% of the north-eastern slope was devastated by grounding icebergs. Here the sediments consisted of large boulders, gravel or blocks of finer sediment resembling an irregularly ploughed field. On the Norsel Bank the Cinachyra concentrations were locally associated with high abundances of sea anemones. Total observation time amounted to 11.5 hours, corresponding to a transect length of 6-9 km.
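The laser pair acting as an image scale makes the observed area straightforward to compute. A small sketch with hypothetical numbers, assuming the lasers project a known spacing onto the seafloor image:

```python
def swath_width_m(laser_spacing_m, laser_px, image_width_px):
    """Image footprint width: metres-per-pixel from the laser pair,
    scaled to the full image width."""
    return laser_spacing_m / laser_px * image_width_px

def transect_area_m2(transect_length_m, swath_m):
    """Area observed along a transect of (approximately) constant swath width."""
    return transect_length_m * swath_m

# Hypothetical: lasers 0.1 m apart appear 50 px apart in a 1000 px wide image.
swath = swath_width_m(0.1, 50, 1000)          # 2.0 m footprint width
area = transect_area_m2(6000.0, swath)        # over a 6 km transect
```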

Relevance: 10.00%

Abstract:

The properties of data and activities in business processes can be used to greatly facilitate several relevant tasks performed at design- and run-time, such as fragmentation, compliance checking, or top-down design. Business processes are often described using workflows. We present an approach for mechanically inferring business domain-specific attributes of workflow components (including data items, activities, and elements of sub-workflows), taking as starting point known attributes of workflow inputs and the structure of the workflow. We achieve this by modeling these components as concepts and applying sharing analysis to a Horn clause-based representation of the workflow. The analysis is applicable to workflows featuring complex control and data dependencies, embedded control constructs such as loops and branches, and embedded component services.
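The inference described above can be pictured as a fixpoint propagation of input attributes through the workflow structure. The following sketch uses a made-up three-activity workflow, with attribute sets standing in for the concept-based sharing domain of the actual analysis:

```python
def infer_attributes(workflow, input_attrs):
    """Propagate domain attributes from workflow inputs to every derived
    data item, iterating to a fixpoint over the workflow's activities."""
    attrs = dict(input_attrs)
    changed = True
    while changed:
        changed = False
        for act, (ins, out) in workflow.items():
            if all(i in attrs for i in ins):            # all inputs known
                new = set().union(*(attrs[i] for i in ins))
                if attrs.get(out) != new:
                    attrs[out] = new                    # output inherits attributes
                    changed = True
    return attrs

# Hypothetical workflow: activity -> (input data items, output data item).
workflow = {
    "normalize": (["raw_order"], "order"),
    "price":     (["order", "catalog"], "quote"),
    "archive":   (["quote"], "record"),
}
```

Here a data item derived (transitively) from both inputs ends up carrying the attributes of both, which is the kind of domain-specific information the paper infers statically.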

Relevance: 10.00%

Abstract:

Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or that this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being with respect to abstract values (substitutions) rather than concrete ones. We study the multiple specialization of logic programs based on abstract interpretation. This involves, in principle, and based on information from global analysis, generating several versions of a program predicate for different uses of that predicate, optimizing these versions, and, finally, producing a new, "multiply specialized" program. While multiple specialization has received theoretical attention, little previous evidence exists on its practicality. In this paper we report on the incorporation of multiple specialization in a parallelizing compiler and quantify its effects. A novel approach to the design and implementation of the specialization system is proposed. The implementation techniques produce specializations identical to those of the best previously proposed techniques but require little or no modification of some existing abstract interpreters. Our results show that, using the proposed techniques, the resulting "abstract multiple specialization" is indeed a relevant technique in practice. In particular, in the parallelizing compiler application, a good number of run-time tests are eliminated and invariants are extracted automatically from loops, resulting generally in lower overheads and in several cases in increased speedups.
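To give a flavour of multiple specialization (a toy Python analogue, not the abstract-interpretation machinery of the paper): when analysis determines a property of an argument at some call sites, a version optimized for that property can be generated alongside the generic one.

```python
def make_power(known_exp=None):
    """Return a power function: specialized when the exponent is known
    at specialization time, generic otherwise."""
    if known_exp == 2:
        return lambda x: x * x            # fully specialized: no loop, no test
    if known_exp is not None:
        def spec(x):                      # partially specialized: loop bound fixed
            r = 1
            for _ in range(known_exp):
                r *= x
            return r
        return spec
    return lambda x, n: x ** n            # generic version for unknown exponents
```

Call sites where the analysis proves the exponent is 2 would be rewritten to use the specialized version, while the generic version remains for all other uses.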

Relevance: 10.00%

Abstract:

Precise modeling of the program heap is fundamental for understanding the behavior of a program, and is thus of significant interest for many optimization applications. One of the fundamental properties of the heap that can be used in a range of optimization techniques is the sharing relationship between the elements in an array or collection. If an analysis can determine that the memory locations pointed to by different entries of an array (or collection) are disjoint, then in many cases loops that traverse the array can be vectorized or transformed into a thread-parallel version. This paper introduces several novel sharing properties over the concrete heap and corresponding abstractions to represent them. In conjunction with an existing shape analysis technique, these abstractions allow us to precisely resolve the sharing relations in a wide range of heap structures (arrays, collections, recursive data structures, composite heap structures) in a computationally efficient manner. The effectiveness of the approach is evaluated on a set of challenge problems from the JOlden and SPECjvm98 suites. Sharing information obtained from the analysis is used to achieve substantial thread-level parallel speedups.
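The disjointness property that enables parallelization can be illustrated with a purely dynamic check; the paper establishes it statically, so this sketch only demonstrates the property itself (one level of reachability, by object identity):

```python
def entries_disjoint(array):
    """True if no heap object is reachable from two different array entries
    (checked one level deep, by object identity)."""
    seen = set()
    for entry in array:
        ids = {id(obj) for obj in entry}
        if ids & seen:
            return False                  # two entries share an object
        seen |= ids
    return True
```

If the entries are disjoint, a loop over the array can safely be split across threads, since no two iterations can touch the same object.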

Relevance: 10.00%

Abstract:

It is very often the case that programs require passing, maintaining, and updating some notion of state. Prolog programs often implement such stateful computations by carrying this state in predicate arguments (or, alternatively, in the internal database). This often causes code obfuscation, complicates code reuse, introduces dependencies on the data model, and is prone to incorrect propagation of the state information among predicate calls. To partly solve these problems, we introduce contexts as a consistent mechanism for specifying implicit arguments and their threading in clause goals. We propose a notation and an interpretation for contexts, ranging from single goals to complete programs, give an intuitive semantics, and describe a translation into standard Prolog. We also discuss a particular lightweight implementation in Ciao Prolog, and we show the usefulness of our proposals on a series of examples and applications, including code directly using contexts, DCGs, extended DCGs, logical loops, and other custom control structures.
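The core idea of contexts, making the threaded state implicit at call sites, has analogues outside Prolog. A minimal Python sketch (illustrative only, since the actual mechanism is a source-to-source translation into standard Prolog):

```python
def thread(*goals):
    """Compose state-transforming goals so the state argument is threaded
    implicitly, instead of appearing at every call site."""
    def run(state):
        for goal in goals:
            state = goal(state)
        return state
    return run

# Goals that would otherwise each need explicit in/out state arguments.
inc = lambda s: {**s, "count": s["count"] + 1}
def log(msg):
    return lambda s: {**s, "log": s["log"] + [msg]}

pipeline = thread(inc, log("step done"), inc)
```

The call sites in `pipeline` mention no state at all; the combinator threads it, much as DCG translation threads the difference-list pair through clause goals.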

Relevance: 10.00%

Abstract:

A geophysical survey by magnetic resonance is performed by exciting the subsurface water through the emission of a variable current along a wire laid out on the surface in a square or circular loop. The investigated volume depends on the size of this loop which, together with the current intensity used to excite the water, determines the different subsurface depths from which information is extracted, usually between 10 and 100 m. The doctoral thesis presented here consists of adapting the Magnetic Resonance Sounding method for near-surface applications using small loops. Information about the subsurface at the scale of decimetres to a few metres is of interest for soil physics and, in general, for various engineering problems, concerning both water extraction and construction. After a review of the current state of the art of the method regarding its usual applications, the problems inherent to its adaptation to near-surface measurements are studied. To solve these problems, two main lines of research have been considered. First, a study is made of the influence of the characteristics of the excitation pulse emitted by the equipment on the quality of the measurements obtained, and of possible strategies to improve this pulse; the excitation pulse is a key parameter for extracting information from different depths of the ground. Second, the measurement device is optimized for the study of the first few metres of the soil with the available instrumentation, the NumisLITE equipment from Iris Instruments.
ABSTRACT. Magnetic Resonance Sounding is a geophysical method based on the excitation of subsurface water by a variable electrical current delivered through a wire extended on the surface, forming a circle or a square. The investigated volume depends on the wire length and the current used, which determine the subsurface depths reached. In the usual application of the method, this depth ranges between 10 and 100 m. This thesis studies the adaptation of the above method to more superficial applications using smaller wire loops. Information about the subsurface in the range of decimetres to a few metres is of interest for the physics of soils, as well as for different engineering problems, whether for water extraction or for construction. After a review of the current state of the art of the method regarding its usual applications, the special issues attached to its use for very shallow measurements are studied. In order to address these problems, two main research lines are considered. On the one hand, a study of the influence of the characteristics of the emitted pulse on the resulting measurement quality is performed, and possible strategies to improve this pulse are investigated, as the excitation pulse is a key parameter for obtaining information from different depths of the subsurface. On the other hand, the study aims to optimize the measurement device for the study of the first metres of the ground with the available instrumentation, the NumisLITE equipment from Iris Instruments.

Relevance: 10.00%

Abstract:

The research in this thesis is related to static cost and termination analysis. Cost analysis aims at estimating the amount of resources that a given program consumes during execution, and termination analysis aims at proving that the execution of a given program will eventually terminate. These analyses are strongly related; indeed, cost analysis techniques heavily rely on techniques developed for termination analysis. Precision, scalability, and applicability are essential in static analysis in general. Precision relates to the quality of the inferred results, scalability to the size of the programs that can be analyzed, and applicability to the class of programs that can be handled by the analysis (independently of precision and scalability issues). This thesis addresses these aspects in the context of cost and termination analysis, from both practical and theoretical perspectives. For cost analysis, we concentrate on the problem of solving cost relations (a form of recurrence relations) into closed-form upper and lower bounds, which is at the heart of most modern cost analyzers, and also where most of the precision and applicability limitations can be found. We develop tools, and their underlying theoretical foundations, for solving cost relations that overcome the limitations of existing approaches, and demonstrate superiority in both precision and applicability. A unique feature of our techniques is the ability to smoothly handle both lower and upper bounds, by reversing the corresponding notions in the underlying theory. For termination analysis, we study the hardness of the problem of deciding termination for a specific form of simple loops that arise in the context of cost analysis. This study gives a better understanding of the (theoretical) limits of scalability and applicability for both termination and cost analysis.
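A tiny example of the kind of problem a cost-relation solver faces, with a hypothetical relation checked numerically rather than solved symbolically as the thesis tools would:

```python
def cost(n):
    """Cost relation C(0) = 1, C(n) = C(n-1) + n
    (e.g. a loop whose n-th iteration does n units of work)."""
    return 1 if n == 0 else cost(n - 1) + n

def closed_form(n):
    """Candidate closed-form solution n(n+1)/2 + 1; it is exact here,
    so it serves as both the upper and the lower bound."""
    return n * (n + 1) // 2 + 1
```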

Relevance: 10.00%

Abstract:

A Steam Generator Tube Rupture (SGTR) in a Pressurized Water Reactor (PWR) can lead to an atmospheric release that bypasses the containment via the secondary system, exiting through the power-operated relief valves of the affected steam generator. That is why SGTRs have historically been treated in a special way in the different Deterministic Safety Analyses (DSA), focusing on the radioactive release more than on the possibility of core damage, as is done for the other Loss of Coolant Accidents (LOCAs).

Relevance: 10.00%

Abstract:

High switching frequencies (several MHz) allow the integration of low-power DC/DC converters. Although, in theory, a high switching frequency would make it possible to implement a conventional Voltage Mode Control (VMC) or Peak Current Mode Control (PCMC) with very high bandwidth, in practice parasitic effects and robustness requirements limit the applicability of these control techniques. This paper compares the VMC and PCMC techniques with the V2IC control. This control is based on two loops. The fast internal loop uses information from the output capacitor current and the error voltage, providing fast dynamic response under load and voltage reference steps, while the slow external voltage loop provides accurate steady-state regulation. This paper shows the fast dynamic response of the V2IC control under load and output voltage reference steps, and its robustness when operating with additional output capacitors added by the customer.
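A rough discrete-time sketch of a two-loop control law of this general shape (toy gains and structure chosen purely for illustration; not a verified V2IC design):

```python
class TwoLoopControl:
    """Toy two-loop controller: a fast loop on the voltage error and the
    output capacitor current, plus a slow integral loop for accurate
    steady-state regulation. Gains are illustrative only."""
    def __init__(self, v_ref, k_fast=0.5, k_int=0.05):
        self.v_ref, self.k_fast, self.k_int = v_ref, k_fast, k_int
        self.integral = 0.0

    def duty(self, v_out, i_cap):
        err = self.v_ref - v_out
        self.integral += self.k_int * err               # slow external loop
        u = self.integral + self.k_fast * (err - i_cap) # fast internal loop
        return min(max(u, 0.0), 1.0)                    # clamp to a valid duty cycle
```

The fast term reacts within a cycle to load steps seen in the capacitor current, while the integral term slowly removes the remaining steady-state error.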

Relevance: 10.00%

Abstract:

Switching of a signal beam by another control beam at a different wavelength is demonstrated experimentally using the optical bistability occurring in a 1.55 μm distributed feedback semiconductor optical amplifier (DFBSOA) working in reflection. Counterclockwise (S-shaped) and reverse (clockwise) bistability are observed in the output of the control and the signal beam, respectively, as the power of the input control signal is increased. With this technique an optical signal can be set at either of the optical input wavelengths by appropriate choice of the powers of the input signals. The switching properties of the DFBSOA are studied experimentally as the applied bias current is increased from below to above threshold, and for different levels of optical power in the signal beam and different wavelength detunings between the two input signals. Higher on-off extinction ratios, wider bistable loops and lower input power requirements for switching are obtained when the DFBSOA is operated slightly above its threshold value.

Relevance: 10.00%

Abstract:

The characteristics of optical bistability in a vertical-cavity semiconductor optical amplifier (VCSOA) operated in reflection are reported. The dependence of the optical bistability in VCSOAs on the initial phase detuning and on the applied bias current is analyzed. The optical bistability is also studied for different numbers of superimposed periods in the top distributed Bragg reflector (DBR) that forms the internal cavity of the device. The appearance of the X-bistable and the clockwise bistable loops is predicted theoretically in a VCSOA operated in reflection for the first time, to the best of our knowledge. Moreover, it is also predicted that controlling the VCSOA's top reflectivity by adding new superimposed periods to its top DBR reduces by one order of magnitude the input power needed for the attainment of the X- and the clockwise bistable loops, compared to that required in in-plane semiconductor optical amplifiers. These results, added to the ease of fabricating two-dimensional arrays of this kind of device, could be useful for the development of new optical logic or optical signal regeneration devices.

Relevance: 10.00%

Abstract:

Industrial oil and gas exploration and production plants have numerous communication systems that ensure both the correct operation of the processes taking place in them and the safety of the plant itself. For this Final Degree Project, the design of the PAGA (Public Address and General Alarm) system and of the closed-circuit television (CCTV) system has been carried out for the Hydrocracker process unit, responsible for hydrocracking. Starting from the requirements defined by the corporate specifications of the oil companies for both systems, PAGA and CCTV, the theoretical principles on which each of them is based are presented, together with the guidelines to follow for the design, and correct operation is demonstrated using specific software. The following software tools were used: EASE for the acoustic simulation; PSpice for the electrical simulation of the public-address amplification stages; and JVSG for the CCTV design. The sound coverage of both the units and the rest of the indoor facilities must guarantee the intelligibility of the transmitted messages. An acoustic simulation makes it possible to know how the public address system will behave without having to install it, which is very useful for this type of project, whose engineering is carried out prior to the construction of the plant. The correct design of the amplification stages, based on high-impedance, constant-voltage (100 V) lines, is also verified. The closed-circuit television (CCTV) system guarantees the transmission of visual signals from all accesses to the facilities and units of the plant, as well as a real-time view of the correct operation of the chemical processes carried out in the refinery. The system has remote control stations for the handling and management of the deployed cameras, and a storage system that records to hard drives (RAID-5) through a SAN (Storage Area Network). The different phases of an engineering project in the hydrocarbon E&P sector are specified, notably: proposal and acquisition, kick-off meeting (KOM), site survey, project plan, design and documentation, test procedures, installation, commissioning, and system acceptance. English terminology is used given the global scope of the sector. The last part of the project presents an approximate budget for the materials used in the PAGA and CCTV designs.
ABSTRACT. Integrated communications for oil and gas allow reducing risks, improving productivity, reducing costs, and countering threats to safety and security. Both a PAGA (Public Address and General Alarm) system and a closed-circuit television system have been designed for this project in order to ensure reliable security of an oil refinery. Based on the requirements defined by corporate specifications for both systems (PAGA and CCTV), the theoretical principles have been presented, as well as the guidelines for a reliable design and its demonstration. The following software has been used: EASE for the acoustic simulation; PSpice for simulation of the public-address amplification stages; and the JVSG tool for the CCTV design. The acoustics of both the units and the other indoor facilities must ensure intelligibility of the transmitted messages. An acoustic simulation allows us to know how the PAGA system will perform without installing the loudspeakers, which is very useful for this type of project, whose engineering is performed prior to the construction of the plant. Furthermore, the correct design of the amplifier stages based on high-impedance, constant-voltage (100 V) lines has been verified. Closed-circuit television (CCTV) ensures the transmission of visual signals from all accesses to the facilities, as well as a real-time view of the proper functioning of the chemical processes carried out at the refinery. The system has remote control stations for the handling and management of the deployed cameras. A storage system for the recordings on hard drives (RAID-5) through a SAN (Storage Area Network) is also included. The phases of an engineering project in oil and gas are defined in the current project, including: proposal and acquisition, kick-off meeting (KOM), site survey, project plan, design and documentation, testing procedures (FAT and SAT), installation, commissioning, and acceptance of the systems. Finally, an approximate budget of the materials used in the design of PAGA and CCTV has been presented.
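A basic design check for the 100 V constant-voltage lines mentioned above: the sum of the speaker tap powers on a line must stay below the rating of the amplifier driving it. A sketch with hypothetical tap values and a 20% headroom figure (a common rule of thumb, not a value taken from the project):

```python
def line_load_ok(tap_powers_w, amp_rating_w, headroom=0.2):
    """True if the total tapped speaker power leaves the required headroom
    on the amplifier driving a 100 V constant-voltage line."""
    return sum(tap_powers_w) <= (1.0 - headroom) * amp_rating_w
```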

Relevance: 10.00%

Abstract:

Solar thermal power plants are usually installed in locations with high yearly average solar radiation, often deserts. In such conditions, the cooling water required for thermodynamic cycles is rarely available. Moreover, when solar radiation is high, the ambient temperature is very high as well; this leads to excessive condensation temperatures, especially when air condensers are used, and decreases the plant efficiency. However, the temperature variation in deserts is often very large, which leads to relatively low temperatures during the night. This fact can be exploited with the use of a closed cooling system, so that the coolant (water) is chilled during the night and stored. The chilled water is then used during peak-temperature hours to cool the condenser (dry cooling), thus enhancing power output and efficiency. The present work analyzes the performance improvement achieved by night-time cool thermal storage, compared to an equivalent air-cooled power plant. Dry cooling is proved to be energy-effective for moderately high day-night temperature differences (20 °C), often found in desert locations. The storage volume requirement for different power plant efficiencies has also been studied, showing an asymptotic tendency.
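The storage volume requirement follows from a simple energy balance, V = Q·t / (ρ·c·ΔT): the water mass must absorb the condenser heat rejected during the hot hours within the usable temperature swing. A sketch with hypothetical plant numbers (the paper's actual plant data differ):

```python
RHO_WATER = 1000.0   # kg/m^3
CP_WATER = 4186.0    # J/(kg K)

def storage_volume_m3(q_reject_w, hours, delta_t_k):
    """Water volume needed to absorb the condenser heat rejected during the
    peak-temperature hours, given the usable night-day temperature swing."""
    energy_j = q_reject_w * hours * 3600.0
    mass_kg = energy_j / (CP_WATER * delta_t_k)
    return mass_kg / RHO_WATER

# Hypothetical: 1 MW rejected over 6 peak hours, 20 K usable swing -> ~258 m^3.
volume = storage_volume_m3(1.0e6, 6, 20.0)
```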

Relevance: 10.00%

Abstract:

Service-Oriented Computing (SOC) has become a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform the more complex, higher-level, often inter-organizational tasks, using atomic services or other service compositions. In such systems, Quality of Service (QoS) properties, such as processing speed, cost, availability or security, are critical for the usability of the services or their compositions in any concrete application. The analysis of these properties can be performed more precisely, and yield richer information, if program analysis techniques are used, such as complexity or data sharing analysis, which are able to simultaneously analyze both the control structures and the data structures, dependencies, and operations in a composition. Computational cost analysis for service compositions can support predictive monitoring as well as proactive adaptation through the automatic inference of computational cost, using upper and lower bounds as functions of the value or size of the input messages. Such cost functions can be used for adaptation in the form of selecting, among candidate services, those that minimize the total cost of the composition, based on the actual data passed to the service. The cost functions can also be combined with parameters extracted empirically from the infrastructure to produce QoS bound functions over the input data, which can be used to predict, at invocation time, potential or imminent violations of Service Level Agreements (SLAs).
In critical compositions, a sufficiently effective and precise continuous QoS prediction can be based on constraint modeling of the QoS from the structure of the composition, empirical runtime data, and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control, as well as to choreographies with multiple participants engaged in complex interactions that modify their state. Data sharing analysis can support adaptation actions, such as parallelization, fragmentation, and component selection, which are based on functional dependencies and on the information content of the messages, internal data, and activities of the composition, when complex control constructs such as loops, branches, and nested flows are used. Both the functional dependencies and the information content (described through user-defined attributes) can be expressed using a representation based on first-order logic (Horn clauses), and the analysis results can be interpreted as lattice-based conceptual models.
ABSTRACT. Service-Oriented Computing (SOC) is a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform more complex, higher-level, often cross-organizational tasks using atomic services or other service compositions. In such systems, Quality of Service (QoS) properties, such as performance, cost, availability or security, are critical for the usability of services and their compositions in concrete applications. Analysis of these properties can become more precise and richer in information if it employs program analysis techniques, such as the complexity and sharing analyses, which are able to simultaneously take into account both the control and the data structures, dependencies, and operations in a composition.
Computation cost analysis for service composition can support predictive monitoring and proactive adaptation by automatically inferring computation cost using upper- and lower-bound functions of the value or size of input messages. These cost functions can be used for adaptation by selecting service candidates that minimize the total cost of the composition, based on the actual data that is passed to them. The cost functions can also be combined with empirically collected infrastructural parameters to produce QoS bound functions of the input data that can be used to predict potential or imminent Service Level Agreement (SLA) violations at the moment of invocation. In mission-critical applications, an effective and accurate continuous QoS prediction can be achieved by constraint modeling of composition QoS based on its structure, the data known at runtime, and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control, and to choreographies with multiple participants engaged in complex stateful interactions. Sharing analysis can support adaptation actions, such as parallelization, fragmentation, and component selection, which are based on functional dependencies and the information content of the composition messages, internal data, and activities, in the presence of complex control constructs, such as loops, branches, and sub-workflows. Both the functional dependencies and the information content (described using user-defined attributes) can be expressed using a first-order logic (Horn clause) representation, and the analysis results can be interpreted as lattice-based conceptual models.
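The candidate-selection idea described above, picking the service whose cost bound is minimal for the actual input, can be sketched in a few lines (hypothetical services with made-up linear cost-bound functions):

```python
# Hypothetical upper-bound cost functions of the input message size n.
candidates = {
    "svcA": lambda n: 5 * n + 100,   # cheap per item, high fixed cost
    "svcB": lambda n: 12 * n + 10,   # costly per item, low fixed cost
}

def select_service(candidates, n):
    """Pick the candidate whose cost bound is minimal for the actual input size."""
    return min(candidates, key=lambda name: candidates[name](n))
```

Which candidate wins depends on the actual message size at invocation time, which is exactly why the selection uses cost *functions* rather than fixed cost figures.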