930 results for "Input-output data"
Abstract:
This study analyses the economic, social and environmental impacts that the development of a Multimodal Logistics Platform in Puerto Asís, Putumayo, would generate in the Amazon region comprising Colombia, Brazil and Ecuador, as an option for the entry and exit of goods, with the aim of establishing the optimal route for transporting goods to Asia. The project arises as an initiative to constitute a transport axis interconnecting the ports of the Amazon region. It consists of establishing road and waterway infrastructure to speed up transport and reduce the high costs faced by the region's trade. To justify the project's feasibility, the impacts it would produce must be evaluated in several spheres: economic, environmental and social. To identify these impacts, the current profiles of the countries involved in the Multimodal Logistics Platform project in Puerto Asís, Putumayo, are established, in order to understand their present conditions and determine to what extent they would be altered. The infrastructure situation of the area is then presented, showing the challenges that developing this type of project in the Amazon region entails, with the ultimate aim of improving the infrastructure not only of this sector but of the country as a whole, making it more competitive globally. Finally, the effects that building the platform would generate are evaluated, justifying its development.
Abstract:
To test the appropriateness of using highly controlled, progressive input (the materials presented to the learner) that gradually introduces elements slightly beyond the learner's current production, following Krashen. To analyse the effect of this input on free, communicative written expression, i.e. the output (what the learner shows they have learned), which is observable and analysable and which in turn measures intake (what the learner has assimilated). To analyse the theoretical, linguistic and psycholinguistic implications of this study. Participants: 47 distance-learning students over 25 years of age, from different parts of Spain, and 44 third-year BUP students who had followed formal classroom instruction for 6 years. The sample contains a total of 1,022 productions, analysed using a computerised record sheet. The relationship between the input of each unit of Cher Ami and the learner's output after 4-8 hours of work, based on model readings, is considered fundamental. In response to these readings, students must answer with a written text of 5-10 sentences. A record sheet was designed on which the tutors at the participating centres transcribed the students' free written productions for each of the 18 units. Each tutor studied the errors, carried out semantic and pragmatic analysis, and flagged the relevant productions. The computer program compares the input given with the input used, and produces lists of the input used, the input not used, and the extra-input (elements used by the learner that do not appear in the materials provided). The research team unified criteria, cleaned the corresponding lists and prepared the data entry for the analysis of results. Instruments: test, computer program. Analysis: percentages, tables, contrastive study. The results are grouped into four main blocks. 1. Results by teaching unit, which allow a contrastive analysis with the contents and strategies of each unit of the method used, and lead to the modification of some specific aspects of those materials. 2. A global analysis, containing analyses of the most frequent errors and their causes; the semantic analysis reveals the learners' fields of interest, with productions strongly centred on themselves and their immediate surroundings. 3. The contrastive study with Bachillerato students yields the surprising result that, in one year and with a programmed methodology, the CAD obtains far more satisfactory results than those of Bachillerato students studying the language in the classroom for their sixth year. 4. In the sequential individual study, although the results are less conclusive, greater progress is observed in students who keep to the input, whereas those who use extra-input elements not only make more errors but, in some cases, end up blocked in their own learning. These last two blocks of results support the researchers' theory that learning a cumulative, controlled foreign-language micro-language is preferable to an interlanguage that risks fossilising and blocking the learner's progress, provided the context is non-natural and curriculum-based. These communicative objectives, whose progress must be measurable, provide learners with intrinsic motivation that makes them participants in their own learning, contributing to greater efficacy in the language teaching-learning process. The contents and methods of formal classroom teaching should be revised.
Abstract:
This thesis studies how to estimate the distribution of regionalised variables whose sample space and scale admit a Euclidean space structure. We apply the principle of working in coordinates: we choose an orthonormal basis, do statistics on the coordinates of the data, and apply the output to the basis in order to recover a result in the original space. Applied to regionalised variables, this yields a single, consistent approach that generalises the well-known properties of kriging techniques to several sample spaces: real, positive or compositional data (vectors of positive components with constant sum) are treated as particular cases. In this way, linear geostatistics is generalised and solutions are offered to well-known problems of non-linear geostatistics, adapting the measure and the criteria of representativeness (i.e., means) to the data at hand. The estimator for positive data coincides with a weighted geometric mean, equivalent to estimating the median, without any of the problems of classical lognormal kriging. The compositional case offers equivalent solutions, but in addition allows multinomial probability vectors to be estimated. With a preliminary Bayesian step, kriging of compositions also becomes a consistent alternative to indicator kriging, a technique used to estimate probability functions of arbitrary variables which, however, often yields negative estimates; the proposed alternative avoids this. The usefulness of this set of techniques is demonstrated in a study of ammonia pollution at an automatic water-quality monitoring station in the Tordera basin, concluding that only with the proposed techniques can one detect the moments at which ammonium turns into ammonia at concentrations above the legal limit.
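The positive-data estimator described in this abstract reduces to a weighted geometric mean: take logs, average with the kriging weights, and exponentiate back. A minimal sketch (the weights here are arbitrary illustrative values standing in for real kriging weights):

```python
import numpy as np

def geometric_mean_estimate(values, weights):
    """Weighted geometric mean: exp of the weighted mean of the logs.

    `values` must be strictly positive; `weights` should sum to 1,
    as kriging weights do under the unbiasedness constraint.
    """
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.exp(np.sum(weights * np.log(values))))

# Toy example: three positive observations with normalised weights.
z = [2.0, 8.0, 4.0]
w = [0.25, 0.25, 0.5]
print(geometric_mean_estimate(z, w))  # ~4.0
```

Because the averaging happens in log space, the estimate can never be negative, which is the property the abstract contrasts with classical lognormal and indicator kriging.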
Abstract:
Eye tracking has become a prevalent technique in the evaluation of user interaction and behaviour with study objects in defined contexts. Common eye tracking data representation techniques offer valuable input regarding user interaction and eye gaze behaviour, namely through the measurement of fixations and saccades. However, these and other techniques may be insufficient for representing the data acquired in specific studies, namely because of the complexity of the study object being analysed. This paper contributes a summary of the data representation and information visualization techniques used in data analysis within different contexts (advertising, websites, television news and video games). Additionally, several methodological approaches are presented, which resulted from studies developed, or under development, at CETAC.MEDIA - Communication Sciences and Technologies Research Centre. In the studies described, traditional data representation techniques were insufficient; new forms of representing data, based on common techniques, were therefore developed with the objective of improving communication and information strategies. For each of these studies, a brief summary of the contribution to its respective area is presented, along with the data representation techniques used and some of the results obtained.
Abstract:
Context-aware multimodal interactive systems aim to adapt to the needs and behavioural patterns of users, and offer a way forward for enhancing the efficacy and quality of experience (QoE) in human-computer interaction. The various modalities that contribute to such systems each provide a specific uni-modal response that is integratively presented as a multi-modal interface, capable of interpreting multi-modal user input and responding to it appropriately through dynamically adapted multi-modal interactive flow management. This paper presents an initial background study in the context of the first phase of a PhD research programme in the area of optimisation of data fusion techniques to serve multimodal interactive systems, their applications and requirements.
Abstract:
We describe a new methodology for comparing satellite radiation budget data with a numerical weather prediction (NWP) model. This is applied to data from the Geostationary Earth Radiation Budget (GERB) instrument on Meteosat-8. The methodology brings together, in near-real time, GERB broadband shortwave and longwave fluxes with simulations based on analyses produced by the Met Office global NWP model. Results for the period May 2003 to February 2005 illustrate the progressive improvements in the data products as various initial problems were resolved. In most areas the comparisons reveal systematic errors in the model's representation of surface properties and clouds, which are discussed elsewhere. However, for clear-sky regions over the oceans the model simulations are believed to be sufficiently accurate to allow the quality of the GERB fluxes themselves to be assessed and any changes in time of the performance of the instrument to be identified. Using model and radiosonde profiles of temperature and humidity as input to a single-column version of the model's radiation code, we conduct sensitivity experiments which provide estimates of the expected model errors over the ocean of about ±5–10 W m−2 in clear-sky outgoing longwave radiation (OLR) and ±0.01 in clear-sky albedo. For the more recent data the differences between the observed and modeled OLR and albedo are well within these error estimates. The close agreement between the observed and modeled values, particularly for the most recent period, illustrates the value of the methodology. It also contributes to the validation of the GERB products and increases confidence in the quality of the data, prior to their release.
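At its simplest, the observation-minus-model comparison described here comes down to bias and scatter statistics over matched scenes, judged against the expected model error band (±5–10 W m−2 for clear-sky OLR). The numbers below are invented purely to show the computation:

```python
import numpy as np

# Hypothetical paired clear-sky OLR samples (W m-2): observed
# (GERB-like) vs modelled (NWP-like); all values are illustrative.
obs = np.array([278.0, 285.5, 290.2, 283.1])
mod = np.array([280.1, 284.0, 292.5, 281.9])

diff = obs - mod
bias = float(np.mean(diff))                    # systematic offset
rmse = float(np.sqrt(np.mean(diff ** 2)))      # overall scatter

# A bias well inside the ~±5-10 W m-2 expected model error would be
# interpreted, as in the paper, as agreement within error estimates.
print(f"bias = {bias:+.2f} W m-2, rmse = {rmse:.2f} W m-2")
```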
Abstract:
GODIVA2 is a dynamic website that provides visual access to several terabytes of physically distributed, four-dimensional environmental data. It allows users to explore large datasets interactively without the need to install new software or download and understand complex data. Through the use of open international standards, GODIVA2 maintains a high level of interoperability with third-party systems, allowing diverse datasets to be mutually compared. Scientists can use the system to search for features in large datasets and to diagnose the output from numerical simulations and data processing algorithms. Data providers around Europe have adopted GODIVA2 as an INSPIRE-compliant dynamic quick-view system for providing visual access to their data.
Abstract:
The common GIS-based approach to regional analyses of soil organic carbon (SOC) stocks and changes is to define geographic layers for which unique sets of driving variables are derived, which include land use, climate, and soils. These GIS layers, with their associated attribute data, can then be fed into a range of empirical and dynamic models. Common methodologies for collating and formatting regional data sets on land use, climate, and soils were adopted for the project Assessment of Soil Organic Carbon Stocks and Changes at National Scale (GEFSOC). This permitted the development of a uniform protocol for handling the various inputs for the dynamic GEFSOC Modelling System. Consistent soil data sets for Amazon-Brazil, the Indo-Gangetic Plains (IGP) of India, Jordan and Kenya, the case study areas considered in the GEFSOC project, were prepared using methodologies developed for the World Soils and Terrain Database (SOTER). The approach involved three main stages: (1) compiling new soil geographic and attribute data in SOTER format; (2) using expert estimates and common sense to fill selected gaps in the measured or primary data; (3) using a scheme of taxonomy-based pedotransfer rules and expert-rules to derive soil parameter estimates for similar soil units with missing soil analytical data. The most appropriate approach varied from country to country, depending largely on the overall accessibility and quality of the primary soil data available in the case study areas. The secondary SOTER data sets discussed here are appropriate for a wide range of environmental applications at national scale. These include agro-ecological zoning, land evaluation, modelling of soil C stocks and changes, and studies of soil vulnerability to pollution. Estimates of national-scale stocks of SOC, calculated using SOTER methods, are presented as a first example of database application.
Independent estimates of SOC stocks are needed to evaluate the outcome of the GEFSOC Modelling System for current conditions of land use and climate. (C) 2007 Elsevier B.V. All rights reserved.
Modelled soil organic carbon stocks and changes in the Indo-Gangetic Plains, India from 1980 to 2030
Abstract:
The Global Environment Facility co-financed Soil Organic Carbon (GEFSOC) Project developed a comprehensive modelling system for predicting soil organic carbon (SOC) stocks and changes over time. This research is an effort to predict SOC stocks and changes for the Indian Indo-Gangetic Plains (IGP), an area with a predominantly rice (Oryza sativa) - wheat (Triticum aestivum) cropping system, using the GEFSOC Modelling System, and to compare the output with stocks generated using mapping approaches based on soil survey data. The GEFSOC Modelling System predicts an estimated SOC stock for the IGP, India of 1.27, 1.32 and 1.27 Pg for 1990, 2000 and 2030, respectively, in the top 20 cm of soil. The SOC stock using a mapping approach based on soil survey data was 0.66 and 0.88 Pg for 1980 and 2000, respectively. The SOC stock estimated using the GEFSOC Modelling System is higher than the stock estimated using the mapping approach. This is because, while the GEFSOC System accounts for variation in crop input data (crop management), the soil mapping approach only considers regional variation in soil texture and wetness. The trend of overall change in the modelled SOC stock estimates shows that the IGP, India may have reached an equilibrium following 30-40 years of the Green Revolution. This can be seen in the SOC stock change rates. Various estimation methods show SOC stocks of 0.57-1.44 Pg C for the study area. The trend of overall change in C stock assessed from the soil survey data indicates that the soils of the IGP, India may store a projected 1.1 Pg of C in 2030. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
The Mersey Basin has been significantly polluted for over 200 years. However, there is a lack of quantitative historical water quality data as effective water quality monitoring and data recording only began 30-40 years ago. This paper assesses water pollution in the Mersey Basin using a Water Pollution Index constructed from social and economic data. Methodology, output and the difficulties involved with validation are discussed. With the limited data input available the index approximately reproduces historical water quality. The paper illustrates how historical studies of environmental water quality may provide valuable identification of factors responsible for pollution and a marker set for contemporary and future water quality issues in the context of the past. This is an issue of growing research interest.
Abstract:
A wide variety of exposure models are currently employed for health risk assessments. Individual models have been developed to meet the chemical exposure assessment needs of Government, industry and academia. These existing exposure models can be broadly categorised according to the following types of exposure source: environmental, dietary, consumer product, occupational, and aggregate and cumulative. Aggregate exposure models consider multiple exposure pathways, while cumulative models consider multiple chemicals. In this paper each of these basic types of exposure model are briefly described, along with any inherent strengths or weaknesses, with the UK as a case study. Examples are given of specific exposure models that are currently used, or that have the potential for future use, and key differences in modelling approaches adopted are discussed. The use of exposure models is currently fragmentary in nature. Specific organisations with exposure assessment responsibilities tend to use a limited range of models. The modelling techniques adopted in current exposure models have evolved along distinct lines for the various types of source. In fact different organisations may be using different models for very similar exposure assessment situations. This lack of consistency between exposure modelling practices can make understanding the exposure assessment process more complex, can lead to inconsistency between organisations in how critical modelling issues are addressed (e.g. variability and uncertainty), and has the potential to communicate mixed messages to the general public. Further work should be conducted to integrate the various approaches and models, where possible and regulatory remits allow, to get a coherent and consistent exposure modelling process. 
We recommend the development of an overall framework for exposure and risk assessment with common approaches and methodology, a screening tool for exposure assessment, collection of better input data, probabilistic modelling, validation of model input and output, and a closer working relationship between scientists, policy makers and staff from different Government departments. A much increased effort is required in the UK to address these issues. The result will be a more robust, transparent, valid and more comparable exposure and risk assessment process. (C) 2006 Elsevier Ltd. All rights reserved.
Abstract:
An investigation using the Stepping Out model of early hominin dispersal out of Africa is presented here. The late arrival of early hominins into Europe, as deduced from the fossil record, is shown to be consistent with poor ability of these hominins to survive in the Eurasian landscape. The present study also extends the understanding of modelling results from the original study by Mithen and Reed (2002. Stepping out: a computer simulation of hominid dispersal from Africa. J. Hum. Evol. 43, 433-462). The representation of climate and vegetation patterns has been improved through the use of climate model output. This study demonstrates that interpretative confidence may be strengthened, and new insights gained when climate models and hominin dispersal models are integrated. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Technology involving genetic modification of crops has the potential to make a contribution to rural poverty reduction in many developing countries. Thus far, pesticide-producing Bacillus thuringiensis (Bt) varieties of cotton have been the main GM crops under cultivation in developing nations. Several studies have evaluated the farm-level performance of Bt varieties in comparison to conventional ones by estimating production technology, and have mostly found Bt technology to be very successful in raising output and/or reducing pesticide input. However, the production risk properties of this technology have not been studied, although they are likely to be important to risk-averse smallholders. This study investigates the output risk aspects of Bt technology by estimating two 'flexible risk' production function models allowing technology to independently affect the mean and higher moments of output. The first is the popular Just-Pope model and the second is a more general 'damage control' flexible risk model. The models are applied to cross-sectional data on South African smallholders, some of whom used Bt varieties. The results show no evidence that a 'risk-reduction' claim can be made for Bt technology. Indeed, there is some evidence to support the notion that the technology increases output risk, implying that simple (expected) profit computations used in past evaluations may overstate true benefits.
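The Just-Pope specification mentioned above lets an input shift the mean and the variance of output separately: y = f(x; β) + h(x; α)^(1/2)·ε. A minimal two-step sketch on simulated data (the data-generating process and all coefficients below are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated toy data: one input x affecting both the mean and the
# variance of output y (all parameter values are illustrative).
n = 500
x = rng.uniform(1.0, 10.0, n)
mean = 2.0 + 1.5 * x                     # f(x; beta)
sd = np.exp(0.2 + 0.15 * x) ** 0.5       # h(x; alpha)^(1/2)
y = mean + sd * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])

# Step 1: OLS for the mean function f(x; beta).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Step 2: regress log squared residuals on x to estimate the risk
# function h(x; alpha) -- the step that lets the input shift output
# variance independently of the mean.
alpha, *_ = np.linalg.lstsq(X, np.log(resid ** 2), rcond=None)

# A positive slope indicates the input is risk-increasing.
print("mean-function coefficients:", beta)
print("risk-function slope:", alpha[1])
```

In the paper's setting the "input" of interest is the Bt technology indicator; a positive, significant risk-function coefficient is the kind of evidence behind the "technology increases output risk" finding.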
Abstract:
A regional overview of the water quality and ecology of the River Lee catchment is presented. Specifically, data describing the chemical, microbiological and macrobiological water quality and fisheries communities have been analysed, based on a division into river, sewage treatment works, fish-farm, lake and industrial samples. Nutrient enrichment and the highest concentrations of metals and micro-organics were found in the urbanised, lower reaches of the Lee and in the Lee Navigation. Average annual concentrations of metals were generally within environmental quality standards although, on many occasions, concentrations of cadmium, copper, lead, mercury and zinc were in excess of the standards. Various organic substances (used as herbicides, fungicides, insecticides, chlorination by-products and industrial solvents) were widely detected in the Lee system. Concentrations of ten micro-organic substances were observed in excess of their environmental quality standards, though not in terms of annual averages. Sewage treatment works were the principal point source input of nutrients, metals and micro-organic determinands to the catchment. Diffuse nitrogen sources contributed approximately 60% and 27% of the in-stream load in the upper and lower Lee respectively, whereas approximately 60% and 20% of the in-stream phosphorus load was derived from diffuse sources in the upper and lower Lee. For metals, the most significant source was the urban runoff from North London. In reaches less affected by effluent discharges, diffuse runoff from urban and agricultural areas dominated trends. High microbiological content, observed in the River Lee particularly in urbanised reaches, was far in excess of the EC Bathing Water Directive standards.
Water quality issues and degraded habitat in the lower reaches of the Lee have led to impoverished aquatic fauna but, within the mid-catchment reaches and upper agricultural tributaries, less nutrient enrichment and channel alteration has permitted more diverse aquatic fauna.
Progress on “Changing coastlines: data assimilation for morphodynamic prediction and predictability”
Abstract:
The task of assessing the likelihood and extent of coastal flooding is hampered by the lack of detailed information on near-shore bathymetry. This is required as an input for coastal inundation models, and in some cases the variability in the bathymetry can impact the prediction of those areas likely to be affected by flooding in a storm. The constant monitoring and data collection that would be required to characterise the near-shore bathymetry over large coastal areas is impractical, leaving the option of running morphodynamic models to predict the likely bathymetry at any given time. However, if the models are inaccurate the errors may be significant if incorrect bathymetry is used to predict possible flood risks. This project is assessing the use of data assimilation techniques to improve the predictions from a simple model, by rigorously incorporating observations of the bathymetry into the model, to bring the model closer to the actual situation. Currently we are concentrating on Morecambe Bay as a primary study site, as it has a highly dynamic inter-tidal zone, with changes in the course of channels in this zone impacting the likely locations of flooding from storms. We are working with SAR images, LiDAR, and swath bathymetry to give us the observations over a 2.5 year period running from May 2003 – November 2005. We have a LiDAR image of the entire inter-tidal zone for November 2005 to use as validation data. We have implemented a 3D-Var data assimilation scheme, to investigate the improvements in performance of the data assimilation compared to the previous scheme which was based on the optimal interpolation method. We are currently evaluating these different data assimilation techniques, using 22 SAR data observations. We will also include the LiDAR data and swath bathymetry to improve the observational coverage, and investigate the impact of different types of observation on the predictive ability of the model. 
We are also assessing the ability of the data assimilation scheme to recover the correct bathymetry after storm events, which can dramatically change the bathymetry in a short period of time.
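For a linear observation operator, the 3D-Var analysis step described above has a closed form via the standard gain matrix. A minimal sketch (grid size, covariances and values below are illustrative assumptions, not the project's Morecambe Bay configuration):

```python
import numpy as np

def three_dvar(xb, B, y, H, R):
    """Analytic 3D-Var analysis for a linear observation operator H.

    Minimises J(x) = (x-xb)^T B^-1 (x-xb) + (y-Hx)^T R^-1 (y-Hx)
    via the gain-matrix form: xa = xb + K (y - H xb).
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)

# Toy bathymetry background on a 3-point grid, with one direct
# observation of the middle point (all numbers illustrative).
xb = np.array([1.0, 2.0, 3.0])      # background depths
B = 0.5 * np.eye(3)                 # background error covariance
H = np.array([[0.0, 1.0, 0.0]])     # observe the middle grid point
R = np.array([[0.5]])               # observation error covariance
y = np.array([2.6])                 # observed depth

xa = three_dvar(xb, B, y, H, R)
print(xa)  # the observed point is pulled towards the observation
```

With equal background and observation error variances, the analysis splits the innovation evenly, moving the middle point from 2.0 to 2.3; off-diagonal terms in B would also spread the correction to neighbouring grid points, which is how sparse SAR-derived waterline observations can update the surrounding bathymetry.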