27 results for Applications in Economics and Epidemiology
at Universidad Politécnica de Madrid
Abstract:
Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed over the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy depend strongly on the fluid flow model equation and on the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real flows (transonic, viscous). On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be struck for engineering applications.

A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh, concentrating its elements in those regions where they are strictly necessary. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases it is impossible to simply guess which regions of the computational domain affect the accuracy the most. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than with their structured counterpart. However, the adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical-error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method.

The current work aims at estimating the truncation error, which arises when discretising a partial differential equation: the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation (where the estimated local error acts as a source term in a corrected system) or zonal modelling.

The work is organized as follows. Chap. 1 contains a short review of mesh adaptation techniques as well as of numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as to a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
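The τ-estimation idea at the heart of this abstract can be illustrated on a model problem. Below is a minimal sketch, assuming a 1D Poisson equation discretised with second-order central differences rather than the thesis's finite-volume Navier-Stokes setting: the fine-grid solution is restricted to a coarser grid and the coarse-grid residual is evaluated, which approximates the truncation error of the coarse discretisation. All names and grid sizes are illustrative.

```python
import numpy as np

def apply_laplacian(u, h):
    """Second-order central-difference Laplacian on interior nodes."""
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2

# Fine grid: solve -u'' = f with u(0) = u(1) = 0 and f = pi^2 sin(pi x)
n = 65                          # fine-grid nodes (odd, so coarse nodes nest)
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)

# Assemble and solve the fine-grid system A_h u_h = f_h
A = (np.diag(2.0 * np.ones(n - 2)) - np.diag(np.ones(n - 3), 1)
     - np.diag(np.ones(n - 3), -1)) / h**2
u = np.zeros(n)
u[1:-1] = np.linalg.solve(A, f[1:-1])

# tau-estimation: restrict u_h to the coarse grid (every other node) and
# evaluate the coarse residual, tau_2h ~ A_2h(I u_h) - I f
uc = u[::2]
fc = f[::2]
tau = -apply_laplacian(uc, 2.0 * h) - fc[1:-1]

print("max |tau| estimate:", np.abs(tau).max())
```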
Abstract:
Patent and trademark offices run according to principles of new management have an inherent need for dependable forecasting data when planning capacity and service levels. For the Spanish Office of Patents and Trademarks to carry out efficient planning of its resource needs, it must use methods which allow it to predict the changes in the number of patent and trademark applications at different time horizons. The prediction of the time series of Spanish patent and trademark applications (1979–2009) was based on the use of different time-series forecasting techniques over a short-term horizon. The methods used can be grouped into two specific areas: trend regression models and time series models. The results of this study show that it is possible to model the series of patent and trademark applications with different models, especially ARIMA, with a satisfactory model fit and relatively low error.
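A minimal sketch of the ARIMA workflow this abstract refers to, using statsmodels on a synthetic stand-in for the annual application counts; the series itself and the (1, 1, 1) order are illustrative assumptions, not the study's fitted model.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for the 1979-2009 annual application counts
rng = np.random.default_rng(0)
years = pd.date_range("1979", "2009", freq="YS")
counts = pd.Series(
    2000 + 40 * np.arange(len(years)) + rng.normal(0, 80, len(years)),
    index=years,
)

# Fit an ARIMA(1, 1, 1) model (illustrative order) and forecast 3 years ahead
fitted = ARIMA(counts, order=(1, 1, 1)).fit()
print(fitted.params)
print(fitted.forecast(steps=3))
```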
Abstract:
Laser material processing is being extensively used in photovoltaic applications, both for the fabrication of thin-film modules and for the enhancement of crystalline silicon solar cells. In this paper, the two-temperature model for thermal diffusion was solved numerically. Laser pulses of 1064, 532 or 248 nm with durations of 35, 26 or 10 ns were considered as the thermal source leading to material ablation. Considering high irradiance levels (10⁸–10⁹ W cm⁻²), a total absorption of the energy during the ablation process was assumed in the model. The materials analysed in the simulation were aluminium (Al) and silver (Ag), which are commonly used as metallic electrodes in photovoltaic devices. Moreover, thermal diffusion was also simulated for crystalline silicon (c-Si). A similar trend of temperature as a function of depth and time was found for both metals and c-Si regardless of the employed wavelength. For each material, the dependence of the ablation depth on the laser pulse parameters was determined by means of an ablation criterion: after the laser pulse, the maximum depth at which the total energy stored in the material equals the vaporisation enthalpy was taken as the ablation depth. For all cases, the ablation depth increased with the laser pulse fluence and did not exhibit a clear correlation with the radiation wavelength. Finally, the experimental validation of the simulation results was carried out, confirming the ability of the model, with the initial hypothesis of total energy absorption, to closely fit the experimental results.
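To make the ablation criterion concrete, here is a minimal sketch that integrates a deliberately simplified single-temperature 1D heat equation over the pulse (the paper solves the full two-temperature model) and then applies the criterion: the ablation depth is the deepest cell whose stored energy density reaches the vaporisation enthalpy. All material constants and pulse parameters are rough illustrative values.

```python
import numpy as np

# Rough aluminium properties (illustrative values only)
rho, cp, k = 2700.0, 900.0, 237.0       # kg m^-3, J kg^-1 K^-1, W m^-1 K^-1
H_vap = 2.9e10                          # vaporisation enthalpy per volume, J m^-3
alpha = k / (rho * cp)                  # thermal diffusivity, m^2 s^-1

# 1D explicit finite-difference grid, 2 um deep
nz, dz = 400, 5e-9
dt = 0.4 * dz**2 / alpha                # explicit stability limit
T = np.full(nz, 300.0)

# 35 ns pulse at ~1e9 W cm^-2 (= 1e13 W m^-2), fully absorbed at the surface
tau_p, I0 = 35e-9, 1e13
t = 0.0
while t < tau_p:
    lap = np.empty(nz)
    lap[1:-1] = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dz**2
    lap[0] = 2.0 * (T[1] - T[0]) / dz**2    # insulated surface; laser added below
    lap[-1] = 0.0                           # far boundary held at ambient
    T += alpha * dt * lap
    T[0] += I0 * dt / (rho * cp * dz)       # absorbed flux into the surface cell
    t += dt

# Ablation criterion: deepest cell whose stored energy density reaches H_vap
stored = rho * cp * (T - 300.0)             # J m^-3 above ambient
ablated = np.flatnonzero(stored >= H_vap)
depth = (ablated[-1] + 1) * dz if ablated.size else 0.0
print(f"estimated ablation depth: {depth * 1e9:.0f} nm")
```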
Abstract:
P2P applications are increasingly present on the web. We have identified a gap in current proposals when it comes to using traditional P2P overlays for real-time multimedia streaming. We analyze the possibilities and challenges of extending WebRTC in order to implement JavaScript APIs for P2P streaming algorithms.
Abstract:
Reducing energy consumption is one of the main goals of sustainability planning in most countries. In Europe, for instance, the EC established the objectives in the Communication "20 20 by 2020: Europe's climate change opportunity".
Abstract:
Augmented reality (AR) is being increasingly used in mobile devices. Most of the available applications are set to work outdoors, mainly due to the availability of a reliable positioning system. Nevertheless, indoor (smart) spaces offer many opportunities for creating new service concepts. In particular, in this paper we explore the applicability of mobile AR to hospitality environments (hotels and similar establishments). From the state of the art of technologies and applications, a portfolio of services has been identified and a prototype using off-the-shelf technologies has been designed. Our objective is to identify the next technological challenges to overcome in order to have suitable underlying infrastructures and innovative services which enhance the traveller's experience.
Abstract:
Computed Tomography imaging is a non-invasive alternative for observing soil structures, mainly the pore space. In soil data, the pore space corresponds to empty or free space, in the sense that no solid material is present there, only fluids; since fluid transport depends on the pore space, it is important to identify the regions that correspond to pore zones. In this paper we present a methodology to detect pore space and solid soil based on the synergy of image processing, pattern recognition and artificial intelligence. Mathematical morphology is used as the image processing technique for image enhancement. In order to find groups of pixels with similar grey-level intensity, i.e. more or less homogeneous groups, a novel image sub-segmentation based on a Possibilistic Fuzzy c-Means (PFCM) clustering algorithm is used. Artificial Neural Networks (ANNs) are very efficient for demanding, large-scale and generic pattern recognition applications; for this reason, a classifier based on an artificial neural network is finally applied to classify the soil images into two classes, pore space and solid soil respectively.
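A minimal sketch of the clustering step: standard fuzzy c-means on grey-level intensities, as a simplified stand-in for the Possibilistic Fuzzy c-Means variant the paper uses. The two-cluster setup (pore vs. solid) and the synthetic image are assumptions for illustration.

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means on 1-D features (grey levels)."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))      # membership matrix, rows sum to 1
    for _ in range(iters):
        um = u**m
        centers = um.T @ x / um.sum(axis=0)         # weighted cluster centres
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))            # membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Synthetic CT slice: dark pores (~40) on a brighter solid matrix (~160)
rng = np.random.default_rng(1)
img = rng.normal(160, 15, (64, 64))
img[20:30, 20:40] = rng.normal(40, 10, (10, 20))    # a pore region

centers, u = fuzzy_cmeans(img.ravel())
pore_cluster = int(np.argmin(centers))              # darker centre = pore
pore_mask = (u.argmax(axis=1) == pore_cluster).reshape(img.shape)
print("pore fraction:", pore_mask.mean())
```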
Abstract:
In Brazil, a low-latitude country characterized by its high availability and uniformity of solar radiation, the use of PV solar energy integrated in buildings is still incipient. However, there are at the moment several initiatives which hint that this will change shortly. In countries where this technology is already a daily reality, such as Germany, Japan or Spain, the recommendations and basic criteria to avoid losses due to orientation and tilt are widespread. Extrapolating those measures, devised for high latitudes, to all regions without a previous deeper analysis is standard practice; yet they do not always correspond to reality, which frequently leads to false assumptions and may become an obstacle in a country taking its first steps in this area. In this paper, the potential solar yield of different surfaces in Brazilian cities (located at latitudes between 0° and 30°S) is analyzed, with the aim of providing the necessary tools to evaluate the suitability of building envelopes for photovoltaic use.
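The orientation-and-tilt question can be sketched with pure solar geometry. The snippet below uses the classic approximation that an equator-facing surface tilted by an angle behaves roughly like a horizontal surface at a latitude shifted by that angle toward the pole of the sky it faces, and it compares annual extraterrestrial irradiation, ignoring atmosphere and diffuse radiation entirely. It is an illustrative toy under these assumptions, not the paper's methodology.

```python
import numpy as np

GSC = 1367.0  # solar constant, W m^-2

def daily_extraterrestrial(lat_deg, n):
    """Daily extraterrestrial irradiation (J m^-2) on a horizontal plane."""
    phi = np.radians(lat_deg)
    delta = np.radians(23.45) * np.sin(2 * np.pi * (284 + n) / 365)
    ws = np.arccos(np.clip(-np.tan(phi) * np.tan(delta), -1, 1))
    e0 = 1 + 0.033 * np.cos(2 * np.pi * n / 365)
    return (86400 / np.pi) * GSC * e0 * (
        np.cos(phi) * np.cos(delta) * np.sin(ws) + ws * np.sin(phi) * np.sin(delta)
    )

days = np.arange(1, 366)
for lat in (0.0, -15.0, -30.0):          # Brazilian latitudes, 0 to 30 S
    horizontal = daily_extraterrestrial(lat, days).sum()
    for tilt in (0, 10, 20, 30):
        # Tilting toward the equator shifts the effective latitude by the tilt
        eq_lat = lat + tilt if lat < 0 else lat - tilt
        tilted = daily_extraterrestrial(eq_lat, days).sum()
        print(f"lat {lat:6.1f}  tilt {tilt:2d}  relative yield {tilted/horizontal:5.3f}")
```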
Abstract:
The authors, all from UPM and relatively closely grouped, have all intervened in different academic or real cases on the subject, at different times according to their different ages. With the precedent of E. Torroja and A. Páez in Madrid, Spain, on probabilistic safety models for concrete around 1957 (a line now present in the ICOSSAR conferences), author J.M. Antón, involved since autumn 1967 in European steel construction within CECM, produced a mathematical model for the reductions due to the superposition of independent loads, and from it a load coefficient pattern for codes (Rome, Feb. 1969) that was practically adopted for European construction, suggesting at JCSS (Lisbon, Feb. 1974) a unified treatment for concrete, steel and aluminium. That model represents each type of load over 50 years with a Gumbel type I distribution, reduces it to 1 year so that it can be added to the other independent loads, and sets the sum, within Gumbel theory, back to a 50-year return period; parallel models exist. A complete reliability system was produced, including non-linear effects such as those from buckling, phenomena considered to some extent in the current Construction Eurocodes produced from the Model Codes. The system was considered by the author in CEB in the presence of hydraulic effects from rivers, floods and the sea, with reference to actual practice. When drafting a Road Drainage Norm at MOPU, Spain, the authors developed an optimization model giving a way to determine the return period, 10 to 50 years, for the hydraulic flows to be considered in road drainage. Satisfactory examples were a stream in the SE of Spain with a Gumbel type I model and a paper by Ven Te Chow on the Mississippi at Keokuk using Gumbel type II; the model can be modernized with more varied extreme-value laws. In fact, in the MOPU drainage norm the drafting commission also acted as an expert panel to set a table of return periods for the elements of road drainage, in effect a complex multi-criteria decision system. These precedent ideas were used, e.g., in widely applied codes and presented in symposia or meetings, but not published in journals in English, and a condensate of the authors' contributions is presented here. The authors are also involved in optimization for hydraulic and agricultural planning, and give modest hints of intended applications in the presence of agricultural and environmental planning, as a selection of the criteria and utility functions involved in Bayesian, multi-criteria or mixed decision systems. Modest consideration is given to changes in climate, in production and commercial systems, and in others such as social and financial systems.
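The 50-year/1-year Gumbel bookkeeping described above can be sketched numerically. Below, two independent loads given as 50-year Gumbel type I maxima are reduced to a 1-year basis via the max-stability of the Gumbel family, combined by Monte Carlo (an assumption for illustration; the original model is analytical), and the combined annual load is taken back to a 50-year return period. All parameter values are invented for the example.

```python
import numpy as np
from scipy import stats

ln50 = np.log(50.0)

# Illustrative 50-year-maximum Gumbel parameters for two independent loads
load_a_50 = dict(loc=100.0, scale=12.0)
load_b_50 = dict(loc=60.0, scale=8.0)

def to_annual(p):
    """Gumbel I max-stability: the 1-year maximum keeps the scale and
    shifts the location down by scale * ln(50)."""
    return dict(loc=p["loc"] - p["scale"] * ln50, scale=p["scale"])

a1, b1 = to_annual(load_a_50), to_annual(load_b_50)

# Monte Carlo combination of the two independent annual maxima
rng = np.random.default_rng(0)
n = 200_000
s = (stats.gumbel_r.rvs(**a1, size=n, random_state=rng)
     + stats.gumbel_r.rvs(**b1, size=n, random_state=rng))

# Refit a Gumbel to the annual sum and shift back to a 50-year return period
loc1, scale1 = stats.gumbel_r.fit(s)
x50 = loc1 + scale1 * ln50            # 50-year location of the combined load
print(f"combined annual Gumbel: loc={loc1:.1f}, scale={scale1:.1f}")
print(f"combined 50-year characteristic value ~ {x50:.1f}")
```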
Abstract:
Pragmatism is the leading motivation of regularization. We can understand regularization as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. To mention some typical examples, this happens when fitting parametric or non-parametric models with more parameters than data, or when estimating large covariance matrices. Regularization is usually used, in addition, to improve the bias-variance tradeoff of an estimation. The definition of regularization is thus quite general, and, although the introduction of a penalty is probably the most popular type, it is just one out of multiple forms of regularization. In this dissertation, we focus on the applications of regularization for obtaining sparse or parsimonious representations, where only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in reaching sparsity. Most of the contributions presented here revolve around L1-regularization, although other forms of regularization are explored (also pursuing sparsity in some sense). In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, we devise methodology for regression, supervised classification and structure induction of graphical models. Within the regression paradigm, we focus on kernel smoothing learning, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. We also present an application of regularized regression techniques for modeling the response of biological neurons. The supervised classification advances deal, on the one hand, with the application of regularization for obtaining a naïve Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner. Finally, we present a heuristic for inducing the structure of Gaussian Bayesian networks using L1-regularization as a filter.
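A minimal sketch of the L1-regularization idea at the core of the dissertation, using scikit-learn's Lasso on synthetic data whose true coefficient vector is sparse; the data, dimensions and penalty strength are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic regression problem: more parameters than data, sparse truth
rng = np.random.default_rng(0)
n, p = 80, 100
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]      # only 5 relevant inputs
y = X @ beta + rng.normal(0.0, 0.5, size=n)

# The L1 penalty drives most coefficients exactly to zero (sparsity)
lasso = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients:", np.count_nonzero(lasso.coef_))
print("recovered support:", np.flatnonzero(lasso.coef_))
```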
Abstract:
Zinc is essential for the healthy growth and reproduction of plants, animals and humans. Zinc deficiency is one of the most widespread micronutrient deficiencies in many crops, affecting large extensions of soil in different agricultural areas. Agronomic biofortification of crops, increasing the concentration of the micronutrient Zn in the plant, is one way to avoid Zn deficiency in animals and humans. Inorganic Zn fertilizers, such as ZnSO4, have traditionally been used, although in recent years Zn complexes have been employed as sources of this micronutrient, providing high concentrations of soluble and available Zn in soil. However, the aging of the source in the soil can cause significant changes in its availability to plants: when inorganic Zn sources are added to soil, the more soluble Zn forms lose activity and extractability over time, transforming into more stable and less bioavailable forms. This thesis examines the residual effect of different natural and synthetic Zn complexes, applied to previous navy bean and flax crops, under two different moisture conditions (above and below field capacity, respectively) and in two different soils (acid and calcareous). The fertilizers were applied to the previous crop at three different doses (0, 5 and 10 mg Zn kg⁻¹ soil). The easily leachable Zn was estimated by extraction with 0.1 M BaCl2. Under moisture conditions above field capacity, the percentage of leached Zn was higher in the calcareous soil than in the acid soil. In the navy bean experiment, performed under moisture conditions above field capacity, the amounts of easily leachable Zn extracted were compared with the actual leached Zn; the correlation between them was only valid for complexes with high mobility and for each soil separately. Under moisture conditions below field capacity, the concentration of bioavailable, easily leachable Zn showed highly significant positive correlations with the concentration of available Zn in the soil. The available Zn was estimated with several commonly used extraction methods: DTPA-TEA, AB-DTPA, Mehlich-3 and LMWOAs. These concentrations were higher in the acid soil than in the calcareous one, and the different methods used to estimate the available Zn showed highly significant positive correlations with each other. The distribution of Zn among the different soil fractions was estimated with different sequential extractions, which showed a decrease between the two crops (the previous and the current one) in the most labile Zn fraction and an increase in the concentration of Zn associated with the less labile fractions, such as carbonates, oxides and organic matter. Positive and highly significant correlations were obtained between the concentrations of Zn associated with the most labile fractions (WSEX and WS+EXC, navy bean and flax experiments, respectively) and the available Zn concentrations determined by the different methods. Regarding the plant, the dry matter yield and the Zn concentration in plant were determined. Yield and Zn concentration were higher with the residual effect of the higher dose (10 mg Zn kg⁻¹) than with the lower dose (5 mg Zn kg⁻¹), and higher with the latter than with no Zn application (control). The increase of the Zn concentration in plant with all fertilizer treatments, with respect to the control, was greater in the acid soil than in the calcareous one. The Zn concentrations in plant indicated that, in the calcareous soil, new applications of Zn would be advisable in subsequent crops in order to maintain suitable concentrations in the plant. The highest Zn concentrations in the navy bean plant, grown under moisture conditions above field capacity, were obtained with the residual effect of Zn-HEDTA at the dose of 10 mg Zn kg⁻¹ (280.87 mg Zn kg⁻¹) in the acid soil, and with the residual effect of Zn-DTPA-HEDTA-EDTA at the dose of 10 mg Zn kg⁻¹ (49.89 mg Zn kg⁻¹) in the calcareous soil. In the flax crop, grown under moisture conditions below field capacity, the highest Zn concentrations in plant were obtained with the residual effect of Zn-AML at the dose of 10 mg Zn kg⁻¹ (224.75 mg Zn kg⁻¹) in the acid soil, and with the residual effect of Zn-EDTA at the dose of 10 mg Zn kg⁻¹ (99.83 mg Zn kg⁻¹) in the calcareous soil. The Zn uptake was determined as the combination of yield and Zn concentration in plant. Under moisture conditions above field capacity, with leaching, the Zn uptake by navy bean decreased in the current crop with respect to the previous one. However, in the flax crop, under moisture conditions below field capacity, higher Zn uptakes were obtained in the current crop than in the previous one. The same trend was observed, in both cases, for the percentage of Zn used by the plant.
Abstract:
At the present time, almost all map libraries on the Internet are image collections generated by the digitization of early maps. This type of graphic file provides researchers with the possibility of accessing and visualizing historical cartographic information, keeping in mind that this information has a degree of quality that depends on elements such as the accuracy of the digitization process and proprietary constraints (e.g. visualization and resolution, downloading options, copyright, use constraints). In most cases, access to these map libraries is useful only as a first approach, and it is not possible to use those maps for scientific work due to the sparse tools available to measure, match, analyze and/or combine those resources with other kinds of cartography. This paper presents a method to enrich virtual map rooms and to provide historians and other professionals with a tool that lets them make the most of map libraries in the digital era.
Abstract:
Ubiquitous sensor network deployments, such as those found in Smart city and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, it is the supercomputing facilities that present the higher economic and environmental impact, due to their very high power consumption, and this problem has so far been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. Although not optimal, these allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
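A minimal sketch of the kind of energy-minimizing, application-aware assignment described above: a greedy heuristic that gives each task to the feasible node with the lowest marginal energy cost, so low-demand tasks land on idle WSN nodes instead of the data center. The node names, power figures and the greedy rule are all invented for illustration, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float              # available compute units
    joules_per_unit: float       # marginal energy per compute unit
    load: float = 0.0

def assign(tasks, nodes):
    """Greedy heuristic: largest tasks first, each to the feasible node
    with the lowest marginal energy cost (illustrative policy)."""
    plan = {}
    for name, demand in sorted(tasks.items(), key=lambda t: -t[1]):
        feasible = [n for n in nodes if n.capacity - n.load >= demand]
        best = min(feasible, key=lambda n: n.joules_per_unit)
        best.load += demand
        plan[name] = best.name
    return plan

# Data-center node is powerful but energy-hungry; idle WSN nodes are small but cheap
nodes = [
    Node("datacenter", capacity=100.0, joules_per_unit=5.0),
    Node("wsn-hub-1", capacity=4.0, joules_per_unit=1.0),
    Node("wsn-hub-2", capacity=4.0, joules_per_unit=1.2),
]
tasks = {"video-analytics": 40.0, "air-quality-agg": 2.0, "parking-events": 3.0}

plan = assign(tasks, nodes)
energy = sum(n.load * n.joules_per_unit for n in nodes)
print(plan)
print(f"total energy: {energy:.1f} J")
```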
Abstract:
Personalized health (p-health) systems can contribute significantly to the sustainability of healthcare systems, though their feasibility is yet to be proven. One of the problems related to their development is the lack of well-established development tools for this domain. As the p-health paradigm is focused on patient self-management, big challenges arise around the design and implementation of patient systems. This paper presents a reference platform created for the development of these applications, and shows the advantages of its adoption in a complex project dealing with cardio-vascular diseases.
Abstract:
Energy efficiency is a major design issue in the context of Wireless Sensor Networks (WSN). If data is to be sent to a far-away base station, collaborative beamforming by the sensors may help to distribute the load among the nodes and reduce fast battery depletion. However, collaborative beamforming techniques are far from optimality and in many cases may be wasting more power than required. In this contribution we consider the issue of energy efficiency in beamforming applications. Using a convex optimization framework, we propose the design of a virtual beamformer that maximizes the network's lifetime while satisfying a pre-specified Quality of Service (QoS) requirement. A distributed consensus-based algorithm for the computation of the optimal beamformer is also provided.
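A minimal convex-optimization sketch of the design problem described above: choose beamforming weights that minimize the worst per-node battery drain (a proxy for maximizing network lifetime) subject to a received-signal QoS constraint. The channel model, battery levels and the exact constraint form are illustrative assumptions, not the paper's formulation, and the sketch is centralized rather than distributed.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 8                                    # collaborating sensor nodes
h = rng.normal(size=n) + 1j * rng.normal(size=n)   # node-to-base-station channels
battery = rng.uniform(0.5, 1.0, size=n)            # remaining energy per node
gamma = 4.0                              # required received signal level (QoS)

w = cp.Variable(n, complex=True)         # per-node beamforming weights
drain = cp.multiply(cp.square(cp.abs(w)), 1.0 / battery)  # power over battery

# Minimizing the maximum relative drain approximates maximizing lifetime.
# The QoS constraint is made convex by fixing the received-signal phase.
prob = cp.Problem(
    cp.Minimize(cp.max(drain)),
    [cp.real(np.conj(h) @ w) >= gamma, cp.imag(np.conj(h) @ w) == 0],
)
prob.solve()
print("status:", prob.status)
print("worst relative drain:", prob.value)
```

The min-max objective spreads transmit power toward nodes with fuller batteries, which is the lifetime intuition the abstract appeals to; a consensus-based distributed solver, as in the paper, would compute the same optimum without a central coordinator.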