Abstract:
Precise measurements were conducted in continuous-flow seawater mesocosms located in full sunlight that compared the metabolic response of coral, coral-macroalgae and macroalgae systems over a diurnal cycle. Irradiance controlled net photosynthesis (Pnet), which in turn drove net calcification (Gnet), and altered pH. Pnet exerted the dominant control on [CO3]2- and aragonite saturation state (Omega arag) over the diel cycle. Dark calcification rate decreased after sunset, reaching zero near midnight, followed by an increasing rate that peaked at 03:00 h. Changes in Omega arag and pH lagged behind Gnet throughout the daily cycle by two or more hours. The flux rate Pnet was the primary driver of calcification. Daytime coral metabolism rapidly removes dissolved inorganic carbon (DIC) from the bulk seawater, and photosynthesis provides the energy that drives Gnet while increasing the bulk water pH. These relationships result in a correlation between Gnet and Omega arag, with Omega arag as the dependent variable. High rates of H+ efflux continued for several hours following the mid-day peak in Gnet, suggesting that corals have difficulty in shedding waste protons, as described by the Proton Flux Hypothesis. DIC flux (uptake) followed Pnet and Gnet and dropped off rapidly following peak Pnet and peak Gnet, indicating that corals cope more effectively with the problem of limited DIC supply than with the problem of eliminating H+. Over a 24 h period, the plot of total alkalinity (AT) versus DIC, as well as the plot of Gnet versus Omega arag, revealed a circular hysteresis pattern over the diel cycle in the coral and coral-algae mesocosms, but not in the macroalgae mesocosm. The presence of macroalgae did not change Gnet of the corals, but altered the relationship between Omega arag and Gnet. Predictive models of how future global changes will affect coral growth that are based on oceanic Omega arag must include the influence of future localized Pnet on Gnet and changes in the rate of reef carbonate dissolution. The correlation between Omega arag and Gnet over the diel cycle is simply the response of the CO2-carbonate system to increased pH as photosynthesis shifts the equilibria and increases [CO3]2- relative to the other DIC components, [HCO3]- and [CO2]. Therefore Omega arag closely tracked pH as an effect of changes in Pnet, which also drove changes in Gnet. Measurements of DIC flux and H+ flux are far more useful than concentrations in describing coral metabolism dynamics. Coral reefs are systems that exist in constant disequilibrium with the water column.
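For reference, the carbonate-system quantities discussed above follow the standard definitions below (a sketch only; K'sp denotes the stoichiometric solubility product of aragonite):

\[
\Omega_{\mathrm{arag}} = \frac{[\mathrm{Ca}^{2+}]\,[\mathrm{CO}_3^{2-}]}{K'_{sp}}, \qquad
\mathrm{DIC} = [\mathrm{CO}_2^*] + [\mathrm{HCO}_3^-] + [\mathrm{CO}_3^{2-}]
\]

Photosynthetic CO2 uptake raises pH and shifts the DIC speciation toward [CO3]2-, which is why Omega arag tracks pH and Pnet in the measurements described.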
Abstract:
The Tara Oceans Expedition (2009-2013) sampled the world oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set provides continuous measurements made with a WETLabs Eco-FL sensor mounted on the flow-through system between June 4th, 2011 and March 30th, 2012. Data were recorded approximately every 10 s. Two issues affected the data: 1. periods when 0.2 µm filtered water was used as blanks, and 2. periods when fluorescence was affected by non-photochemical quenching (NPQ; chlorophyll fluorescence is reduced when cells are exposed to light, e.g. Falkowski and Raven, 1997). Median values and their standard deviations were binned into 5-min bins, with the period of light/dark indicated by an added variable (so that NPQ-affected data can be excluded if the user so chooses). Data were first calibrated using HPLC data collected on the Tara (there were 36 match-ups within 30 min of each other). Fewer were available when there was no evident NPQ, and the resulting scale factor was 0.0106 mg Chl m-3/count. To increase the calibration match-ups we used the AC-S data, which provide a robust estimate of chlorophyll (e.g. Boss et al., 2013). The scale factor computed over a much larger range of values than HPLC was 0.0088 mg Chl m-3/count (compared to 0.0079 mg Chl m-3/count based on the manufacturer). In the archived data the fluorometer data are merged with the TSG data; the raw data are provided as well as the manufacturer calibration constants, the blank computed from filtered measurements, and chlorophyll calibrated using the AC-S. For a full description of the processing of the Eco-FL please see Taillandier, 2015.
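A minimal sketch of the kind of calibration and binning described above, assuming hypothetical variable names (a pandas DataFrame with a DatetimeIndex and a 'counts' column); the scale factor is the AC-S based value quoted in the abstract, while the blank value would come from the filtered-water measurements:

    # Sketch: convert raw Eco-FL counts to chlorophyll and bin medians/std into 5-min bins.
    import numpy as np
    import pandas as pd

    SCALE_FACTOR = 0.0088   # mg Chl m-3 per count (AC-S based value quoted above)

    def counts_to_chl(counts, blank):
        """Convert raw fluorometer counts to chlorophyll concentration (mg m-3)."""
        return SCALE_FACTOR * (np.asarray(counts, dtype=float) - blank)

    def bin_5min(df, blank):
        """Bin calibrated chlorophyll into 5-minute medians and standard deviations."""
        chl = pd.Series(counts_to_chl(df["counts"], blank), index=df.index, name="chl")
        return chl.resample("5min").agg(["median", "std"])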
Abstract:
The Tara Oceans Expedition (2009-2013) sampled the world oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set provides continuous measurements of the partial pressure of carbon dioxide (pCO2), using a ProOceanus CO2-Pro instrument mounted on the flow-through system. This automatic sensor is fitted with an equilibrator made of a gas-permeable silicone membrane and an internal detection loop with the non-dispersive infrared detector of a PPSystems SBA-4 CO2 analyzer. A zero-CO2 baseline is provided for the subsequent measurements by circulating the internal gas through a CO2 absorption chamber containing soda lime or Ascarite. The frequency of this automatic zero-point calibration was set to 24 hours. All data recorded during zeroing, together with the 15 minutes of data after each calibration, were discarded. The output of CO2-Pro is the mole fraction of CO2 in the measured water, and the pCO2 is obtained using the measured total pressure of the internal wet gas. The fugacity of CO2 (fCO2) in the surface seawater, whose difference with the atmospheric CO2 fugacity is proportional to the air-sea CO2 flux, is obtained by correcting the pCO2 for the non-ideal behaviour of CO2 gas according to Weiss (1974). The fCO2 computed using CO2-Pro measurements was corrected to sea surface conditions by accounting for the temperature effect on fCO2 (Takahashi et al., 1993). The surface seawater observations, initially recorded at a 15-second frequency, were averaged over 5-min cycles. The performance of CO2-Pro was adjusted by comparing the sensor outputs against the thermodynamic carbonate calculation of pCO2 using the carbonic acid system constants of Millero et al. (2006) from determinations of total inorganic carbon (CT) and total alkalinity (AT) in discrete samples collected at the sea surface. AT was determined using an automated open-cell potentiometric titration (Haraldsson et al. 1997). CT was determined with an automated coulometric titration (Johnson et al. 1985; 1987), using the MIDSOMMA system (Mintrop, 2005). fCO2 data are flagged according to the WOCE guidelines following Pierrot et al. (2009), identifying recommended values and questionable measurements and giving additional information about the reasons for questionable flags.
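For orientation, the temperature normalization of Takahashi et al. (1993) referred to above is commonly applied in the form (SST and equilibrator temperature Teq in °C; 0.0423 °C-1 is the empirical coefficient):

\[
f\mathrm{CO}_2(\mathrm{SST}) = f\mathrm{CO}_2(T_{\mathrm{eq}})\,\exp\!\left[\,0.0423\,(\mathrm{SST} - T_{\mathrm{eq}})\,\right]
\]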
Abstract:
Many specialists in international trade have started saying that the era of the mega FTA is approaching. If the three poles of the global economy, namely East Asia, the EU and the United States, form mega FTAs, most of the volume of global trade will be covered. That may be fine, but many countries will be left out of the mega FTAs, most of which will be the least developed countries (LDCs). Since the inception of the Doha Development Agenda (DDA) negotiations in 2001, the WTO and its member countries have tried to include LDCs in the world trading system through various means, including DFQF and AfT. Although these means have had some positive impact on the economic development of LDCs, most of the LDCs will never feel comfortable with the current world trading system. To overcome the stalemate in the DDA and to create an inclusive world trading system, we need more commitment from both LDCs and non-LDCs. To surmount the prolonged stalemate in the DDA, we should understand how ordinary people in LDCs feel and think about the current world trading system. Those voices have seldom been listened to, even by the decision makers of their own countries. In order to understand the situation of the people in LDCs, IDE-JETRO carried out several research projects using macro, meso and micro approaches. At the micro level, we collected and analyzed statements from ordinary people concerning their opinions about the world trading system. The interviewees are ordinary people such as street vendors, farmers and factory workers. We asked about where they buy and sell daily necessities, their perception of imported goods, export promotion and free trade at large, etc. These ‘voices of the people’ surveys were conducted in Madagascar and Cambodia during 2013. Based on this research, and especially the findings from the ‘voices of the people’ surveys, we propose a ‘DDA-MDGs hybrid’ strategy to conclude the DDA negotiations and develop a more inclusive and somewhat more ethical world trading system. Our proposal may be summarized in the following three points. (1) Aid for Trade (AfT) ver. 2: Currently AfT is mainly focused on coordinating several aid projects related to LDCs’ capacity building. However, this is inadequate; for the proposed ‘DDA-MDGs hybrid’, a super AfT is needed. The WTO, other development agencies and LDC governments will not only coordinate but also jointly plan aid projects for trade capacity building. AfT ver. 2 includes infrastructure projects funded through grant aid, ODA loans or private investment. This is in accordance with the post-MDGs argument, which emphasizes the role of the private sector. (2) Ethical attitude: Reciprocity is a principle of multilateral agreements, and it has been a core premise since the GATT. However, for designing an inclusive system, special and differential treatment (S&D) is still needed for disadvantaged members. To reconcile full reciprocity with less-than-full reciprocity, an ethical attitude is needed on the part of every member, in which each member refrains from insisting on the full rights and demands of its own country. As used herein, the term ‘ethical’ implies more consideration for LDCs; it is almost identical to S&D but with a more positive attitude from developed countries (super S&D). (3) Collect voices of the people: In order to grasp the real situation of the people, the voices of the people on free trade will continue to be collected in other LDCs, and the findings and lessons learned will be fed back to the WTO negotiation space.
Abstract:
A study of the manoeuvrability of a riverine support patrol vessel is presented in order to derive a mathematical model and simulate maneuvers with this ship. The vessel is mainly characterized by its wide beam and its unconventional propulsion system, namely pump-jet type azimuthal propulsion. By processing experimental data and the ship characteristics with diverse formulae to find the proper hydrodynamic coefficients and propulsion forces, a system of three differential equations is assembled and tuned to carry out simulations of the turning test. The simulation accepts variable speed, jet angle and water depth as input parameters, and its output consists of time series of the state variables and a plot of the simulated path and heading of the ship during the maneuver. Thanks to the data from full-scale trials previously performed with the studied vessel, a validation process was carried out, which shows a good fit between simulated and full-scale experimental results, especially for the turning diameter.
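For context, maneuvering models of this kind are usually written as three coupled surge-sway-yaw equations of the general form below (a sketch only; the study's actual hydrodynamic and pump-jet force terms and coefficients are not reproduced here):

\[
m(\dot{u} - v r) = X(u, v, r, \delta), \qquad
m(\dot{v} + u r) = Y(u, v, r, \delta), \qquad
I_z \dot{r} = N(u, v, r, \delta)
\]

where u and v are the surge and sway velocities, r the yaw rate, delta the jet angle, m the ship mass and Iz the yaw moment of inertia.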
Abstract:
Methodology and results of full-scale maneuvering trials for the Riverine Support Patrol Vessel “RSPV”, built by COTECMAR for the Colombian Navy, are presented. This ship is equipped with a “Pump-Jet” propulsion system and the hull corresponds to a wide hull with a high Beam-Draft ratio (B/T = 9.5). Tests were based on the results of simulation of turning diameters obtained from TRIBON M3© design software, applying techniques of Design of Experiments “DOE” to rationalize the number of runs in different conditions of water depth, ship speed, and rudder angle. Results validate the excellent performance of this class of ship and show that turning diameter and other maneuvering characteristics improve with decreasing water depth.
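An illustrative sketch of the kind of factorial trial matrix a DOE approach starts from, before reduction to a rationalized number of runs; the factor levels below are hypothetical, not the values used in the RSPV trials:

    # Sketch: enumerate a full-factorial set of candidate trial conditions.
    from itertools import product

    water_depths_m = [3.0, 6.0, 12.0]      # hypothetical levels
    ship_speeds_kn = [4.0, 7.0, 10.0]      # hypothetical levels
    jet_angles_deg = [15.0, 25.0, 35.0]    # hypothetical levels

    runs = [
        {"depth_m": d, "speed_kn": v, "angle_deg": a}
        for d, v, a in product(water_depths_m, ship_speeds_kn, jet_angles_deg)
    ]
    print(len(runs), "candidate runs before DOE reduction")   # 27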
Abstract:
In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the “Smart Grid”, which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called “MagicBox”, equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. There is therefore a large number of energy variables to be monitored, which allows us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, obtained on a real house, demonstrate the feasibility of the proposed collaborative system to reduce electrical power consumption and to increase energy efficiency.
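A minimal, hypothetical sketch of the kind of demand-side decision such monitoring enables (the function, thresholds and variable names are illustrative, not the paper's control logic): defer a controllable appliance until there is enough PV surplus or stored energy.

    # Sketch: simple rule-based demand-side management decision.
    def should_run_appliance(pv_power_w: float,
                             load_power_w: float,
                             battery_soc: float,
                             appliance_power_w: float) -> bool:
        surplus_w = pv_power_w - load_power_w
        if surplus_w >= appliance_power_w:   # run directly from PV surplus...
            return True
        return battery_soc > 0.8             # ...or from a well-charged battery

    print(should_run_appliance(pv_power_w=2500, load_power_w=600,
                               battery_soc=0.55, appliance_power_w=1500))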
Abstract:
An important competence of human data analysts is to interpret and explain the meaning of the results of data analysis to end users. However, existing automatic solutions for intelligent data analysis provide limited help in interpreting and communicating information to non-expert users. In this paper we present a general approach to generating explanatory descriptions about the meaning of quantitative sensor data. We propose a type of web application: a virtual newspaper with automatically generated news stories that describe the meaning of sensor data. This solution integrates a variety of techniques from intelligent data analysis into a web-based multimedia presentation system. We validated our approach on a real-world problem and demonstrate its generality using data sets from several domains. Our experience shows that this solution can facilitate the use of sensor data by general users and, therefore, can increase the utility of sensor network infrastructures.
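A minimal illustration (not the paper's actual pipeline) of the data-to-text idea behind such generated stories: turning a series of quantitative sensor readings into a short descriptive sentence; the sensor name and unit are placeholders.

    # Sketch: generate a one-sentence description of a sensor time series.
    def describe(readings, unit="°C", sensor="outdoor temperature"):
        first, last = readings[0], readings[-1]
        delta = last - first
        if abs(delta) < 0.5:
            trend = "remained stable"
        elif delta > 0:
            trend = f"rose by {delta:.1f} {unit}"
        else:
            trend = f"fell by {abs(delta):.1f} {unit}"
        return f"The {sensor} {trend}, ending at {last:.1f} {unit}."

    print(describe([14.2, 15.0, 16.8, 17.4]))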
Abstract:
This paper describes the multi-agent organization of a computer system that was designed to assist operators in decision making in the presence of emergencies. The application was developed for the case of emergencies caused by river floods. It operates in real time on data recorded by sensors (rainfall, water levels, flows, etc.) and applies multi-agent techniques to interpret the data, predict future behaviour and recommend control actions. The system includes an advanced knowledge-based architecture with multiple symbolic representations and uncertainty models (Bayesian networks). The system has been applied and validated at two particular sites in Spain (the Jucar basin and the South basin).
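A toy illustration of the kind of probabilistic reasoning a Bayesian-network model supports (the probabilities are hypothetical, not from the paper): updating the belief in heavy rainfall from a high river-level reading.

    # Sketch: Bayesian updating with made-up probabilities.
    p_rain = 0.3                      # P(heavy rain)
    p_high_given_rain = 0.8           # P(high water level | heavy rain)
    p_high_given_no_rain = 0.1        # P(high water level | no heavy rain)

    p_high = p_rain * p_high_given_rain + (1 - p_rain) * p_high_given_no_rain
    p_rain_given_high = p_rain * p_high_given_rain / p_high   # Bayes' rule

    print(f"P(high level)             = {p_high:.3f}")            # 0.310
    print(f"P(heavy rain | high lvl)  = {p_rain_given_high:.3f}") # 0.774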
Abstract:
The integration of powerful partial evaluation methods into practical compilers for logic programs is still far from reality. This is related both to 1) efficiency issues and to 2) the complications of dealing with practical programs. Regarding efficiency, the most successful unfolding rules used nowadays are based on structural orders applied over (covering) ancestors, i.e., a subsequence of the atoms selected during a derivation. Unfortunately, maintaining the structure of the ancestor relation during unfolding introduces significant overhead. We propose an efficient, practical local unfolding rule based on the notion of covering ancestors which can be used in combination with any structural order and allows a stack-based implementation without losing any opportunities for specialization. Regarding the second issue, we propose assertion-based techniques which allow our approach to deal with real programs that include (Prolog) built-ins and external predicates in a very extensible manner. Finally, we report on our implementation of these techniques in a practical partial evaluator, embedded in a state-of-the-art compiler which uses global analysis extensively (the Ciao compiler and, specifically, its preprocessor CiaoPP). The performance analysis of the resulting system shows that our techniques, in addition to dealing with practical programs, are also significantly more efficient in time and somewhat more efficient in memory than traditional tree-based implementations.
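A drastically simplified, propositional sketch of the covering-ancestor idea (this is not the paper's algorithm nor any CiaoPP code): each atom carries its own chain of covering ancestors, and unfolding of an atom stops when it is not strictly smaller than one of those ancestors; a naive size measure stands in for a real structural order such as homeomorphic embedding.

    # Sketch: local unfolding with per-atom covering ancestors and a "whistle".
    def size(atom):
        return len(atom)                     # stand-in for a structural measure

    def unfold(goal, program):
        """goal: tuple of (atom, ancestors) pairs. Returns residual goals as atom tuples."""
        if not goal:
            return [()]                      # success: empty residual goal
        (atom, ancestors), rest = goal[0], goal[1:]
        if any(size(atom) >= size(anc) for anc in ancestors):
            return [tuple(a for a, _ in goal)]      # stop unfolding; leave residual goal
        leaves = []
        for body in program.get(atom, []):
            # atoms introduced by this clause body get `atom` as a new covering ancestor;
            # the atoms in `rest` keep their own ancestor chains
            new_atoms = tuple((b, ancestors + (atom,)) for b in body)
            leaves.extend(unfold(new_atoms + rest, program))
        return leaves

    # Tiny propositional program: main :- p, q.   p.   q :- q.
    program = {"main": [("p", "q")], "p": [()], "q": [("q",)]}
    print(unfold((("main", ()),), program))      # -> [('q',)] : the recursive q is left residual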
Abstract:
In recent years a lot of research has been invested in parallel processing of numerical applications. However, parallel processing of symbolic and AI applications has received less attention. This paper presents a system for parallel symbolic computing, named ACE, based on the logic programming paradigm. ACE is a computational model for the full Prolog language, capable of exploiting Or-parallelism and Independent And-parallelism. In this paper we focus on the implementation of the and-parallel part of the ACE system (called &ACE) on a shared memory multiprocessor, describing its organization, some optimizations, and presenting some performance figures, proving the ability of &ACE to efficiently exploit parallelism.
Abstract:
We describe lpdoc, a tool which generates documentation manuals automatically from one or more logic program source files, written in ISO-Prolog, Ciao, and other (C)LP languages. It is particularly useful for documenting library modules, for which it automatically generates a rich description of the module interface. However, it can also be used quite successfully to document full applications. A fundamental advantage of using lpdoc is that it helps maintain a true correspondence between the program and its documentation, and also helps identify precisely to which version of the program a given printed manual corresponds. The quality of the documentation generated can be greatly enhanced by including within the program text assertions (declarations with types, modes, etc.) for the predicates in the program, and machine-readable comments. One of the main novelties of lpdoc is that these assertions and comments are written using the Ciao system assertion language, which is also the language of communication between the compiler and the user and between the components of the compiler. This allows a significant synergy among specification, documentation, optimization, etc. A simple compatibility library allows conventional (C)LP systems to ignore these assertions and comments and treat programs documented in this way normally. The documentation can be generated in many formats including texinfo, dvi, ps, pdf, info, html/css, Unix nroff/man, Windows help, etc., and can include bibliographic citations and images. lpdoc can also generate “man” pages (Unix man page format), nicely formatted plain ASCII “readme” files, installation scripts useful when the manuals are included in software distributions, brief descriptions in html/css or info formats suitable for inclusion in on-line indices of manuals, and even complete WWW and info sites containing on-line catalogs of documents and software distributions. The lpdoc manual, all other Ciao system manuals, and parts of this paper are generated by lpdoc.
Abstract:
A high-definition video quality metric built from full-reference ratios. Visual Quality Assessment (VQA) is one of the greatest unsolved challenges in the multimedia environment. Video quality has a very high impact on the end user's (consumer's) perception of services built on the delivery of multimedia content, and it is therefore a key factor in assessing the new paradigm known as Quality of Experience (QoE). Video quality measurement models can be grouped into several branches according to the technical basis underlying the measurement system; the most prominent are those that employ psychovisual models aimed at reproducing the characteristics of the Human Visual System (HVS), and those that instead take an engineering approach in which the quality computation is based on extracting intrinsic image parameters and comparing them. Despite the advances made in this field in recent years, research on video quality metrics, whether with the reference available (so-called full-reference models), with part of it (reduced-reference models), or even without it (no-reference models), still has ample room for improvement and goals to reach. Among these, measuring high-definition signals, especially the very high-quality ones used in the early stages of the value chain, is of special interest because of its influence on the final service quality, and no reliable measurement models currently exist. This doctoral thesis presents a full-reference quality measurement model that we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), based on the weighting of four quality ratios computed from intrinsic image features. These are: the Fidelity Ratio, computed from the morphological (Beucher) gradient; the Visual Similarity Ratio, computed from the visually significant points of the image obtained through local contrast filtering; the Sharpness Ratio, derived from the Haralick contrast texture statistic; and the Complexity Ratio, obtained from the homogeneity measure of the Haralick texture statistics. A novelty of PARMENIA is the use of mathematical morphology and Haralick statistics as the basis of a quality metric, since these techniques have traditionally been tied to remote sensing and object segmentation. In addition, the approach of building the metric as a weighted set of ratios is also novel, since it draws on structural similarity models as well as more classical ones based on the perceptibility of the error introduced by compression-related signal degradation. PARMENIA yields results with a very high correlation with the MOS scores obtained from the subjective user tests carried out to validate it. The selected working corpus comes from internationally validated sequence sets, so that the reported results are of the highest possible quality and rigour.
The methodology followed consisted of generating a set of test sequences of different qualities by encoding with different quantization steps, obtaining subjective ratings of these sequences through subjective quality tests (based on International Telecommunication Union Recommendation BT.500), and validating the metric by computing the correlation of PARMENIA with these subjective scores, quantified through the Pearson correlation coefficient. Once the ratios had been validated, their influence on the final measure optimized and their high correlation with perception confirmed, a second evaluation was carried out on sequences from HDTV test dataset 1 of the Video Quality Experts Group (VQEG), and the results obtained show its clear advantages. Abstract: Visual Quality Assessment has so far been one of the most intriguing challenges in the media environment. Progressive evolution towards higher resolutions and higher quality requirements (e.g. high definition and better image quality) calls for redefined models for quality measurement. Given the growing interest in multimedia services delivery, perceptual quality measurement has become a very active area of research. First, in this work, a classification of objective video quality metrics based on their underlying methodologies and approaches for measuring video quality is introduced to summarize the state of the art. Then, this doctoral thesis describes an enhanced solution for full-reference objective quality measurement based on mathematical morphology, texture features and visual similarity information that provides a normalized metric, which we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), with highly correlated MOS scores. The PARMENIA metric is based on the pooling of different quality ratios that are obtained from three different approaches: Beucher's gradient, local contrast filtering, and the contrast and homogeneity Haralick texture features. The metric performance is excellent, and improves on the current state of the art by providing a wide dynamic range that makes it easier to discriminate between coded sequences of very similar quality, especially for very high bit rates whose quality, currently, is transparent to quality metrics. PARMENIA introduces a degree of novelty with respect to other working metrics: on the one hand, it exploits structural information variation to build the metric's kernel, but complements the measure with texture information and a ratio of visually meaningful points that is closer to typical error-sensitivity-based approaches. We would like to point out that the PARMENIA approach is the only metric built upon full-reference ratios, and using mathematical morphology and texture features (typically used in segmentation) for quality assessment. On the other hand, it obtains results with a wide dynamic range that allows measuring the quality of high-definition sequences from bit rates of hundreds of megabits per second (Mbps) down to typical distribution rates (5-6 Mbps), and even streaming rates (1-2 Mbps). Thus, a direct correlation between PARMENIA and MOS scores is easily constructed. PARMENIA may further enhance the number of available choices in objective quality measurement, especially for very high quality HD materials.
All these results come from a validation carried out on internationally validated datasets on which subjective tests based on the ITU-R BT.500 methodology were performed. The Pearson correlation coefficient has been calculated to verify the accuracy and reliability of PARMENIA.
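An illustrative sketch of the two steps described above, pooling several full-reference ratios into one score and checking its Pearson correlation against MOS; the weights, ratio values and MOS figures are hypothetical, not the thesis's actual data:

    # Sketch: weighted pooling of quality ratios and Pearson correlation with MOS.
    import numpy as np

    def pooled_score(fidelity, similarity, sharpness, complexity,
                     weights=(0.4, 0.3, 0.2, 0.1)):
        ratios = np.array([fidelity, similarity, sharpness, complexity])
        return float(np.dot(weights, ratios))

    # Hypothetical per-sequence ratios and subjective MOS values
    scores = [pooled_score(*r) for r in
              [(0.95, 0.90, 0.88, 0.92), (0.80, 0.75, 0.70, 0.78), (0.60, 0.55, 0.58, 0.50)]]
    mos = [4.6, 3.8, 2.4]

    pearson_r = np.corrcoef(scores, mos)[0, 1]
    print(f"Pearson correlation with MOS: {pearson_r:.3f}")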