911 results for Risk Adjusted Return on Capital
Abstract:
Objectives. The chief goal of this study was to analyze copy number variation (CNV) in breast cancer (BC) tumors from 25 African American women with early-stage disease using molecular inversion probes (MIP) in order to: (1) compare the degree of CNV in tumors with that in normal lymph nodes; (2) determine whether gains and/or losses of genes on specific chromosomes differ between pathologic subtypes of breast cancer defined by known prognostic markers; (3) determine whether gains/losses in copy number (CN) are associated with known oncogenes or tumor suppressor genes; and (4) determine whether increased gains/losses in CN for specific chromosomes were associated with differences in breast cancer recurrence. Methods. Twenty to 37 nanograms of DNA extracted from 25 formalin-fixed, paraffin-embedded (FFPE) tumor samples and matched normal lymph nodes were added to individual tubes. Oligonucleotide probes with recognition sequences at each terminus were hybridized with a genomic target sequence to form a circular structure. Probes were released from the genomic DNA, and those that had been correctly "circularized" in the proper allele/nucleotide reaction combination were amplified using polymerase chain reaction (PCR) primers. Amplicons were fluorescently labeled, the tag sequences were released from the genome homology regions by treatment with uracil-N-glycosylase (which cleaves the probe at the sites where uracils are present), and the tags were detected using a complementary tag array developed by Affymetrix. Results. Analysis of CN gains and losses in tumors and normal tissues showed marked differences in tumors, with numerous chromosomes affected. Similar changes were not observed in normal lymph nodes. When tumors were stratified into four groups based on expression or lack of expression of the estrogen receptor and HER2/neu, distinct patterns of CNV for different chromosomes were observed. 
Gains or losses in CN for specific chromosomes correlated with amplifications/deletions of particular oncogenes or tumor suppressor genes (such as those found on chromosome 17) known to be associated with an aggressive tumor phenotype and poor prognosis. There was a trend for increases in CN observed for chromosome 17 to correlate inversely with time to recurrence of BC (p=0.14 for trend). CNV was also observed for chromosomes 5, 8, 10, 11, and 16, which are known sites of several breast cancer susceptibility alleles. Conclusions. This study is the first to validate the MIP technique, to correlate differences in gene expression with known prognostic tumor markers, and to correlate significant increases/decreases in CN with known tumor markers associated with prognosis. The results of this study may have far-reaching public health implications for identifying new high-risk groups based on genomic differences in CNP, both with respect to prognosis and response to therapy, and for eventually identifying new therapeutic targets for the prevention and treatment of this disease.
Abstract:
Workplace wellness programs have shown substantial benefits for both employer and employee, including decreases in absenteeism, turnover, and medical claims, and increases in employee satisfaction, productivity, and return on investment. However, the implementation approach deserves close attention, since program design and the choice of financial and/or non-financial incentives have been shown to significantly affect employee participation and, in turn, the savings the organization realizes. A systematic review was conducted to evaluate the overall effectiveness of workplace wellness programs on employee health status and lifestyle change, to identify the main types of returns observed from such programs, and to determine whether financial or non-financial incentives had a greater effect on employees. Overall, employee health status improved with participation in wellness programs. The dominant indirect benefit for the organization was employee weight loss leading to a decrease in absenteeism; direct benefits included decreases in medical claims and increases in return on investment. In general, factors such as participation rate and health status changes were most influenced when a financial incentive was provided in the wellness program. Providing a program with effective incentives ultimately rests on the employer's willingness to engage every level of the organization in planning, implementing, and strategizing the optimal approach to improving employees' well-being and productivity, and thus the organization's overall returns.
Abstract:
The purpose of this study was to evaluate the effectiveness of an HIV-screening program at a private health-care institution where the providers were trained to counsel pregnant women about the HIV-antibody test according to the latest recommendations made by the U.S. Public Health Service (PHS) and the Texas legislature. A before-and-after study design was selected for the study. The participants were OB/GYN nurses who attended an educational program and the patients they counseled about the HIV test. Training improved the nurses' overall knowledge about the content of the program and nurses were more likely to offer the HIV test to all pregnant women regardless of their risk of infection. Still, contrary to what was predicted, the nurses did not give more information to increase the knowledge pregnant women had about HIV infection, transmission, and available treatments. Consequently, many women were not given the chance to correctly assess their risk during the counseling session and there was no evidence that knowledge would reduce the propensity of many women to deny being at risk for HIV. On the other hand, pregnant women who received prenatal care after the implementation of the HIV-screening program were more likely to be tested than women who received prenatal care before its implementation (96% vs. 48%); in turn, the likelihood that more high-risk women would be tested for HIV also increased (94% vs. 60%). There was no evidence that mandatory testing with right of refusal would deter women from being tested for HIV. When the moment comes for a woman to make her decision, other concerns are more important to her than whether the option to be tested is mandatory or not. The majority of pregnant women indicated that their main reasons for being tested were: (a) the recommendation of their health-care provider; and (b) concern about the risks to their babies. 
Recommending that all pregnant women be tested regardless of their risk of infection and making the HIV test readily available to all women are probably the two best ways of increasing patients' participation in an HIV-screening program for pregnant women.
Abstract:
Given the migration premium previously identified using an impact evaluation approach, this paper asks why migration is not more prevalent despite the high premium associated with it. Using long-term household panel data from Kagera, a rural region of Tanzania, for the period 1991-2004, this study explores the contribution of education to the migration premium. Separating migrants into those who moved out of their original villages but remained within Kagera and those who left the region, the study finds that, in terms of consumption, the return on investment in education is higher at both destinations. However, whilst the higher return on education fully explains the gains associated with migration within Kagera, it only partly explains those of external migration. These findings suggest that welfare opportunities are higher at the destination and that an individual's limited investment in education plays a major role in preventing short-distance migration from becoming a significant source of improved welfare, which is not the case for long-distance migration. While education plays a role, other mechanisms appear to prevent rural agents from exploiting the arbitrage opportunity when the destination is at a great distance from the source.
Abstract:
In the 2000s, the Philippines' local banking sector pursued very conservative lending behavior while gradually but continuously improving its profitability in terms of ROE (return on equity). A set of analyses of the flow of funds and segment reports of local universal banks, whose loans outstanding to the industrial sector account for more than three fourths of the total, shows that (1) they have actively managed assets overseas, (2) their profitability has come from investment activities in the securities markets, and (3) some universal banks have shifted their resources into the consumer/retail segment. Although further refinement of the dataset is needed for a more detailed analysis, diverse business strategies can be expected among the local universal banks in the near future.
Abstract:
This study presents an empirical analysis of the corporate governance of financial institutions in the United Arab Emirates (UAE). The purpose of this research is to analyze the influence of the structure of the board of directors on the performance of these institutions. To examine the effect of control exerted by particular families on bank management, we estimated models in which the dependent variables are return on assets (ROA) and return on equity (ROE), the independent variables are board-of-directors variables, and the control variables are bank management variables. Our results show that control of corporate governance by a ruler's family within a board of directors has a positive effect on bank profitability, and indicate that such control through a bank's board of directors compensates for the inadequacy of the UAE's corporate governance system.
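The kind of model described above can be sketched as an ordinary least squares regression of ROA on board and control variables. All figures and variable names below are hypothetical and purely illustrative; the study's actual specification and data may differ.

```python
import numpy as np

# Hypothetical bank-level data: ROA (%), board size, a dummy for a ruler's
# family presence on the board, and one control (log total assets).
roa        = np.array([1.8, 2.1, 1.2, 2.5, 1.6, 2.9, 1.1, 2.3])
board_size = np.array([7,   9,   6,  10,   8,  11,   5,   9])
ruler_fam  = np.array([1,   1,   0,   1,   0,   1,   0,   1])  # 1 = family on board
log_assets = np.array([10.2, 11.0, 9.5, 11.4, 10.0, 11.8, 9.1, 10.9])

# OLS: roa = b0 + b1*board_size + b2*ruler_fam + b3*log_assets + e
X = np.column_stack([np.ones_like(roa), board_size, ruler_fam, log_assets])
beta, *_ = np.linalg.lstsq(X, roa, rcond=None)
b0, b1, b2, b3 = beta  # b2 is the estimated ruler-family effect on ROA
```

The sign and significance of `b2` would be the quantity of interest; the same design is then repeated with ROE as the dependent variable.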
Abstract:
The improvement of energy efficiency in existing buildings is always a challenge because of their particular, and sometimes protected, constructive solutions. New construction regulations in Spain leave a large undefined gap where restoration is concerned, because they were developed for new buildings. Rehabilitation, however, is an opportunity for many properties, because it allows owners to obtain benefits from the use of their buildings. The current financial and housing crisis has turned society's attention to existing buildings, and making them more efficient is one of the Spanish government's aims. The economic viability of a rehabilitation action should take all factors into account: both the construction costs and the future operating costs of the building must be considered. Nevertheless, the application of these regulations in Spain is left to the designer's judgment, and thus to a subjective point of view. The research work described in this paper, with the help of several case studies, examines the cost of adapting an existing building to the new construction regulations, and energy efficiency is evaluated in terms of how the investment is recovered. The interest of the research lies in showing how new constructive solutions can achieve higher levels of efficiency in terms of energy, construction, and economy, and in demonstrating that life cycle costing analysis can be a mechanism for finding the advantages and disadvantages of these new constructive solutions. This paper therefore has the following objectives: to analyse constructive solutions in existing buildings; to establish a process for assessing total life cycle costs (LCC) during the planning stages, taking future operating costs into consideration; to select the most advantageous operating system; and to determine the return on investment in terms of construction costs based on the new techniques, the energy savings achieved, and the investment payback periods.
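The economic comparison described above can be sketched with two small helpers: a discounted life-cycle cost (construction plus the present value of operating costs) and a simple payback period. All figures below are hypothetical, chosen only to illustrate the calculation.

```python
def life_cycle_cost(capex, annual_opex, years, rate):
    """Discounted life-cycle cost: up-front construction cost plus the
    present value of annual operating costs over the study period."""
    pv_opex = sum(annual_opex / (1 + rate) ** t for t in range(1, years + 1))
    return capex + pv_opex

def simple_payback(extra_capex, annual_savings):
    """Years needed for the annual energy savings to repay the extra investment."""
    return extra_capex / annual_savings

# Hypothetical retrofit: 60,000 EUR of extra investment cuts operating costs
# from 12,000 to 7,000 EUR/year over a 30-year horizon at a 4% discount rate.
baseline = life_cycle_cost(0,      12_000, 30, 0.04)
retrofit = life_cycle_cost(60_000,  7_000, 30, 0.04)
payback  = simple_payback(60_000, 5_000)   # 12.0 years
```

With these numbers the retrofit has the lower life-cycle cost despite its higher construction cost, which is exactly the trade-off the LCC analysis is meant to expose.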
Abstract:
Plant viruses play a key role as modulators of the spatio-temporal dynamics of their host populations because of their negative impact on plant fitness. Knowledge of the genetic and environmental factors that determine the epidemiology and the genetic structure of virus populations may help in understanding the ecological role of viral infections. However, few experimental works have addressed this issue. This thesis analyses the effect of landscape heterogeneity on the prevalence of viruses and the genetic structure of their populations. 
Also explored is how these environmental factors influence the relative importance of the main mechanisms generating genetic variability (mutation, recombination, and migration) during virus evolution. To do so, the begomoviruses infecting chiltepin (Capsicum annuum var. aviculare (Dierbach) D'Arcy & Eshbaugh) populations in Mexico were used as a model system. The incidence of different viruses was determined in chiltepin populations from six biogeographical provinces representing the species' distribution in Mexico. Populations belonged to different habitats according to the level of human management: populations with no human intervention (wild); populations naturally dispersed and tolerated in managed habitats (let-standing); and human-managed populations (cultivated). Among the viruses analyzed, the begomoviruses showed the highest prevalence, being detected in all populations and sampling years. Only two begomovirus species were found infecting chiltepin: Pepper golden mosaic virus (PepGMV) and Pepper huasteco yellow vein virus (PHYVV). Therefore, all the analyses presented in this thesis focus on these two viruses. The prevalence of PepGMV and PHYVV, in single and mixed infections, increased with higher levels of human management of the host population, which was associated with decreased biodiversity and increased plant density. Furthermore, cultivated populations showed a higher prevalence of mixed infections and symptomatic plants. The prevalence of the two viruses also varied depending on the chiltepin population and the biogeographical province. These results therefore support a classical hypothesis of plant pathology stating that the simplification of natural ecosystems due to human management leads to increased disease risk, and they illustrate the importance of landscape heterogeneity in determining epidemiological patterns. Landscape heterogeneity affected not only the epidemiology of PepGMV and PHYVV but also the genetic structure of their populations. 
Both viruses showed the highest level of genetic differentiation at the population scale, probably associated with the migration patterns of their vector Bemisia tabaci, and a second level at the biogeographical-province scale, which could be related to the role of humans as dispersal agents of PepGMV and PHYVV. Estimates of nucleotide substitution rates in the virus populations indicated rapid evolutionary dynamics. Accordingly, phylogenetic trees of both viruses showed a star topology, suggesting a recent diversification in the chiltepin populations. Reconstruction of PepGMV and PHYVV migration patterns indicated that they expanded from central Mexico following a radial pattern during the last 30 years. Importantly, the spatial genetic structures of the virus populations were similar to that described previously for the chiltepin, which may result in congruence of the host and virus genealogies. Such congruence was found only in wild and let-standing populations, probably reflecting co-divergence in space but not in time, given the very different evolutionary time scales of the host and virus populations. Finally, the frequency of recombination detected in the PepGMV and PHYVV populations indicated that this mechanism plays an important role in the evolution of both viruses at the intra-specific scale. The level of human management had a minor effect on the frequency of recombination but influenced the strength of negative selective pressures on the viral genomes. The results of this thesis highlight the importance of the decreased biodiversity in plant populations associated with the level of human management, and of landscape heterogeneity, for the emergence of new viral diseases. Therefore, it is necessary to consider these environmental factors in order to fully understand the epidemiology and evolution of plant viruses.
Abstract:
Following recent accounting and ethical scandals within the telecom industry, such as the Gowex case, old questions are back on the table: what kind of management and control do we exercise over our businesses, and what use do we make of the specific tools at our disposal? There are indicators that, in a specific, concise, accurate, and brief manner, allow us to analyze and capture the complexity of a business and constitute important support for making optimal decisions. These indicators show, a priori, all relevant data from a purely economic perspective, while there also exists the possibility of including factors that are not strictly of this nature: for instance, indicators that take into account customer satisfaction or corporate reputation, among others. Both kinds of performance indicators together form an integral dashboard, while the purely economic side alone can be considered a basic dashboard. Based on DuPont's methodology, we can calculate a company's ROI (return on investment) from the disaggregation of very useful and much-needed indicators such as ROE (return on equity) and ROA (return on assets); thereby we can come to know, control, and hence optimize the company's leverage level, its liquidity ratio, and its solvency ratio, among others, as well as the yield we can obtain if our decisions and our management of the asset base are optimal. Keeping in mind and making the most of the aforementioned management tools and indicators at our disposal allows us to act knowingly and with full responsibility, and to obtain the maximum planned benefits instead of leaving them to chance. 
We will be able to avoid errors that can lead the company to an unfortunate and undesirable situation and, of course, we will detect, well in advance, the actual needs of the business in terms of accounting and financial sanitation before irreversible situations are reached.
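The DuPont disaggregation mentioned above can be written out explicitly: ROE is the product of profit margin, asset turnover, and financial leverage, and ROA is the first two factors alone. A minimal sketch with hypothetical figures:

```python
def dupont(net_income, revenue, assets, equity):
    """DuPont decomposition: ROE as the product of profit margin,
    asset turnover, and financial leverage (the equity multiplier)."""
    margin   = net_income / revenue   # operating efficiency
    turnover = revenue / assets       # asset-use efficiency
    leverage = assets / equity        # equity multiplier
    roa = margin * turnover           # = net_income / assets
    roe = roa * leverage              # = net_income / equity
    return {"margin": margin, "turnover": turnover,
            "leverage": leverage, "ROA": roa, "ROE": roe}

# Hypothetical telecom operator: 120 net income, 1,500 revenue,
# 2,000 total assets, 800 equity (all in millions).
d = dupont(120, 1_500, 2_000, 800)
# d["ROA"] = 0.06, d["ROE"] = 0.15
```

The value of the decomposition is diagnostic: a weak ROE can be traced to thin margins, slow asset turnover, or insufficient (or excessive) leverage, each of which calls for a different managerial response.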
Abstract:
There are different ways of defining a welfare function. Traditionally, the foundation of welfare economics is the net present value (NPV) calculation, in which the time-dependent preferences of the agents considered are taken into account. However, time preference remains a controversial subject. Currently, the traditional approach employs a unique discount rate for all agents. Nevertheless, this way of discounting appears inconsistent with sustainable development. New research suggests that the discount rate need not be a homogeneous value: discount rates may change with individual preferences. A significant body of evidence suggests that people do not behave according to a constant discount rate. Indeed, the UK Government has recognized the power of the arguments for time-varying rates in its official guidance to ministries on the appraisal of investments and policies. Other authors deal not just with time preference but with uncertainty about future income (precautionary saving). In a situation in which economic growth rates are similar across time periods, the rationale for declining socially optimal discount rates is driven by the preferences of the individuals in the economy rather than by expectations of growth. However, these approaches have mainly been focused on long-term policies where intergenerational risks may appear. Traditional cost-benefit analysis (CBA) uses a unique discount rate, derived from market interest rates or investment rates of return, for discounting the costs and benefits of all social agents included in the CBA. Recent literature, however, has shown that a more adequate measure of social benefit is possible by using different discount rates, including the intertemporal preference rate of users, the private investment discount rate, and the intertemporal preference rate of government. 
In fact, opportunity costs may differ among individuals, firms, governments, or society in general, as do returns on savings. In general, firms or operators require an investment rate linked to the current return on savings, while the discount rate of consumers-users depends on their time preferences with respect to current and future consumption; society, in turn, can take intergenerational well-being into account by adopting a lower discount rate for today's generation. The time discount rate of each social actor (users, operators, government, and society) places a lower value on a future gain, but uncertainty about future income strongly shapes individual preferences. These time and uncertainty preferences should be integrated into the formulation of transport policies that may have significant social impacts. A user's discount rate cannot be the same as an operator's: their preferences are different. In addition, another school of thought suggests that people, as a social group, may have different attitudes towards future costs and benefits. In particular, users have different discount rates related to their income; some research has tried to modify user discount rates using a compensating weight representing the inverse of the household income level. Intertemporal preferences are a proxy for willingness to pay over time, and their consideration is important in judging whether a policy or investment is acceptable.
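The discounting debate above can be made concrete with a small NPV routine that accepts a per-period rate schedule, so a constant rate and a declining (time-varying) schedule can be compared on the same cash flows. The cash flows and rates below are hypothetical:

```python
def npv(cash_flows, rates):
    """Present value of cash_flows[t] for t = 0, 1, 2, ...
    rates[t] is the one-period discount rate between t and t+1, so the
    factor applied to cash_flows[t] compounds the rates of all earlier
    periods. A declining schedule weights distant flows more heavily."""
    total, factor = 0.0, 1.0
    for cf, r in zip(cash_flows, rates):
        total += cf * factor
        factor /= 1 + r
    return total

# Invest 100 at t=0, then receive 8 per year for 30 years.
flows = [-100.0] + [8.0] * 30
constant  = npv(flows, [0.05] * 31)
declining = npv(flows, [0.05] * 11 + [0.03] * 20)  # rate steps down after year 10
```

With the declining schedule the same project shows a higher NPV than under the constant rate, because the later benefits are discounted less, which is precisely why time-varying rates matter for long-lived transport investments.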
Abstract:
Air transport is a strategic sector for the economic growth of any country. The stability and development of this transport mode rest on a fundamental pillar: safe operation, especially when long-term forecasts show scenarios of continuous growth in air traffic. Risk estimation, and therefore the level of safety of an operational airspace, has been based on indirect methods such as the quantification and analysis of voluntary incident reports, or on collision risk models focused on partial or simple operational scenarios such as oceanic airspace. Operation in a terminal maneuvering area is complex, with different arrival and departure traffic flows to one or more airports, frequent changes in aircraft heading and speed, and tactical air traffic control instructions to sequence and separate aircraft. The objective of this thesis is to complement existing safety-monitoring methods, which have their limitations, with the development of a collision risk model for high-density terminal areas that is based on objective data, namely aircraft radar tracks, and that takes into account the complexity of operations in a terminal area. To evaluate the developed model, a prototype tool was implemented in MATLAB© that can process massive numbers of radar tracks for a terminal-area scenario and compute a collision risk value for that scenario. The prototype has been used to estimate the probability of collision for different scenarios of the Madrid terminal area. 
Radar tracks make it possible to monitor the risk level of real scenarios periodically, establishing early-warning thresholds when the risk value deviates excessively, and also to assess the risk level of airspace designs or new modes of operation from radar tracks obtained in real-time or fast-time simulations, so that action can be taken in the early stages of projects.
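A heavily simplified sketch of the data-driven idea: given synchronized radar tracks, compute pairwise horizontal separations and count proximity events below a threshold, which could feed the frequency estimates of a collision risk model. This is an illustration only; the thesis model and the MATLAB© prototype are far more elaborate, and the track coordinates and 3 NM threshold below are hypothetical.

```python
from math import hypot

def min_separation(track_a, track_b):
    """Minimum horizontal distance between two synchronized radar tracks,
    each a list of (x, y) positions in NM at common time steps."""
    return min(hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(track_a, track_b))

def proximity_events(tracks, threshold_nm=3.0):
    """Count aircraft pairs whose separation falls below the threshold;
    such counts can feed frequency estimates in a collision risk model."""
    events = 0
    for i in range(len(tracks)):
        for j in range(i + 1, len(tracks)):
            if min_separation(tracks[i], tracks[j]) < threshold_nm:
                events += 1
    return events

# Toy tracks: t1 and t2 converge to within 2 NM; t3 stays well clear.
t1 = [(0, 0), (2, 0), (4, 0)]
t2 = [(0, 6), (2, 3), (4, 2)]
t3 = [(0, 20), (2, 20), (4, 20)]
```

Run over many days of traffic, a count like this becomes a rate of proximity events per flight hour, one ingredient of a periodic safety-monitoring indicator.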
Abstract:
La presente investigación tiene como objetivo principal diseñar un Modelo de Gestión de Riesgos Operacionales (MGRO) según las Directrices de los Acuerdos II y III del Comité de Supervisión Bancaria de Basilea del Banco de Pagos Internacionales (CSBB-BPI). Se considera importante realizar un estudio sobre este tema dado que son los riesgos operacionales (OpR) los responsables en gran medida de las últimas crisis financieras mundiales y por la dificultad para detectarlos en las organizaciones. Se ha planteado un modelo de gestión subdividido en dos vías de influencias. La primera acoge el paradigma holístico en el que se considera que hay múltiples maneras de percibir un proceso cíclico, así como las herramientas para observar, conocer y entender el objeto o sujeto percibido. La segunda vía la representa el paradigma totalizante, en el que se obtienen datos tanto cualitativos como cuantitativos, los cuales son complementarios entre si. Por otra parte, este trabajo plantea el diseño de un programa informático de OpR Cualitativo, que ha sido diseñado para determinar la raíz de los riesgos en las organizaciones y su Valor en Riesgo Operacional (OpVaR) basado en el método del indicador básico. Aplicando el ciclo holístico al caso de estudio, se obtuvo el siguiente diseño de investigación: no experimental, univariable, transversal descriptiva, contemporánea, retrospectiva, de fuente mixta, cualitativa (fenomenológica y etnográfica) y cuantitativa (descriptiva y analítica). La toma de decisiones y recolección de información se realizó en dos fases en la unidad de estudio. En la primera se tomó en cuenta la totalidad de la empresa Corpoelec-EDELCA, en la que se presentó un universo estadístico de 4271 personas, una población de 2390 personas y una unidad de muestreo de 87 personas. 
The process was repeated in a second phase for the Simón Bolívar Hydroelectric Power Plant, for which a second statistical universe of 300 workers, a population of 191 people and a sample of 58 professionals were determined. Both primary and secondary sources were used to gather information. To collect the primary information, direct observations were carried out, two surveys were designed to detect the areas and processes with the highest risk levels, and a questionnaire combined with a further (ad hoc) survey was designed to establish estimates of the frequency and severity of operational losses. Secondary information was extracted from the databases of Corpoelec-EDELCA, the IEA, the World Bank, the BCBS-BIS, the UPM and UC Berkeley, among others. The frequency and severity distributions of operational losses were established as the independent variables and OpVaR as the dependent variable. No monitoring or control of the variables under analysis was performed, since they were considered at a specific instant and were determined only to establish the existence and point valuation of the OpR in the study unit. The qualitative analysis proposed in the MORM revealed that, in the research unit, 67% of the detected OpR come from two main sources: processes (32%) and external events (35%). Additionally, validation of the MORM at Corpoelec-EDELCA showed that 63% of the OpR in the organization come from three main categories, external fraud being the one present with the greatest regularity and loss severity in the organization. Risk exposure was determined by adapting the concept of OpVaR, generally used for time series, which in this case study has the novelty of being applied to qualitative data transformed with a Likert scale.
The possibility of using probability distributions typical of quantitative data for loss frequency and loss severity distributions built from data of qualitative origin was analyzed. For 64% of the OpR studied, the frequency behaved similarly to the Poisson probability distribution, and in 55% of the cases the log-normal was found to be the most common probability distribution for loss severity, from which it was concluded that the approaches suggested by the BCBS-BIS for time series are applicable to qualitative data. Once the frequency and severity distributions had been obtained, they were convolved using the Monte Carlo method, yielding the loss distribution approach (LDA) for each of the OpR. The OpVaR was derived, as the BCBS-BIS suggests, from the 99.9th or 99th percentile of each LDA, showing that the OpR behave similarly to those of the financial system: the most dangerous are the low-frequency, high-impact risks, because of the difficulty of detecting and monitoring them. Finally, it is considered that the MORM will allow market players and their stakeholders to know the status of their entities effectively, reliably and efficiently, which will reduce the uncertainty of their investments and enable them to establish a new management culture in their organizations.
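The LDA pipeline summarized above (Poisson-distributed loss frequency, log-normal loss severity, Monte Carlo convolution, OpVaR as a high percentile of the simulated loss distribution) can be sketched as follows. This is a minimal illustration using only the standard library; the parameter values and function names are hypothetical, not taken from the thesis.

```python
import math
import random

def simulate_annual_loss(lam, mu, sigma, rng):
    """One simulated year: a Poisson-distributed number of loss events,
    each with a log-normal severity, summed into an annual loss."""
    # Poisson draw by Knuth's inversion method (the stdlib has no Poisson)
    n, p, threshold = 0, 1.0, math.exp(-lam)
    while True:
        p *= rng.random()
        if p <= threshold:
            break
        n += 1
    return sum(rng.lognormvariate(mu, sigma) for _ in range(n))

def opvar(lam, mu, sigma, quantile=0.999, n_sims=100_000, seed=42):
    """OpVaR as the `quantile` percentile of the simulated loss
    distribution (LDA), in the spirit of the BCBS-BIS approach."""
    rng = random.Random(seed)
    losses = sorted(simulate_annual_loss(lam, mu, sigma, rng)
                    for _ in range(n_sims))
    return losses[int(quantile * (n_sims - 1))]
```

In practice `lam`, `mu` and `sigma` would be fitted to the (Likert-transformed) frequency and severity data for each OpR before simulating.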
Abstract:
Concrete deterioration by freeze-thaw cycles in the presence of de-icing salts is a frequent cause of problems in the bridges and infrastructure of European countries. The damage produced by freeze-thaw cycles in concrete can be internal, essentially cracking, and/or external, such as scaling (surface wear). Peninsular Spain has particular geographic and climatic characteristics: 18% of its surface lies above 1000 m, and its mean elevation above sea level is 660 m (making it the second most mountainous country in Europe). As a result, the National Road Network is affected during certain periods by adverse weather, in particular snowfall and frost, which can compromise road conditions for vehicle traffic. For this reason the Directorate General of Roads carries out work every year (six-month winter road-maintenance campaigns) to keep roads serviceable when they are affected by these phenomena. Protocols and operational plans systematize this maintenance work, which has intensified over the last 10 years and which is based on the use of de-icing salts, mainly NaCl, whose purpose is to keep the roads free of ice sheets and snow. In areas of strong thermal oscillation, which in Spain are often found in the central Pyrenees, part of the Cantabrian range and the Central System, significant deterioration of concrete structures and facings is produced by freeze-thaw cycles. Moreover, the use of winter-maintenance de-icing salts greatly accelerates the evolution of this damage.
The concrete decks of road bridges some 40-50 years old generally lack a waterproofing system and frequently consist of an asphalt-mix pavement, a bonding emulsion and the concrete of the slab. This thesis presents research that aims to reproduce in the laboratory the processes that take place in the concrete of existing road-bridge decks, some 40-50 years old, exposed for long periods to de-icing salts applied for winter serviceability and to drastic temperature changes (freezing and thawing). Four research campaigns were therefore carried out. Although they were based on the European standard UNE-CEN/TS 12390-9, "Testing hardened concrete. Freeze-thaw resistance. Mass loss", specimens not standardized for that test were manufactured, since the test is actually conceived to determine only the effect of the cycles on mass loss. The specimen dimensions in our case were 150x300 mm and 75x150 mm (standard cylinders for compressive strength testing according to UNE-EN 12390-3) and 286x76x76 mm (standard prisms for studying volume change according to ASTM C157), which allowed us to perform more tests on the same specimens, as presented in the thesis, and above all to compare the results with cores of similar dimensions extracted from existing bridges. In the first campaign, applying the aforementioned standard, freeze-thaw cycles were run with and without contact with de-icing salts (3% NaCl solution, as the standard establishes). The concrete manufactured in the laboratory, intended to simulate that of old bridge-deck slabs, had a compressive strength of 22.6 MPa and a water/cement ratio of 0.65.
The manufactured concrete specimens were subjected to aggressive freeze/thaw (F/T) cycles, with a maximum temperature of +20°C and a minimum of -20°C, in order to determine the sensitivity of this test both to the type of concrete produced and to the type of specimen manufactured (cylindrical and prismatic). This campaign had a second phase to examine in greater depth the behavior of specimens subjected to F/T cycles in the presence of salts. In the second campaign, carried out on laboratory-made concrete specimens identical to the previous ones, the minimum test temperature was raised to -14°C, which allowed us to analyze the deterioration process in more detail (performing a series of non-destructive and destructive characterization tests, validating their application to the detection of the damage caused by the accelerated freeze-thaw tests, and also applying electron microscopy techniques). The third campaign was carried out on laboratory concrete specimens similar to the previous ones, with a compressive strength of 29.3 MPa and a w/c ratio of 0.65, to one face of which an asphalt coating 2-4 cm thick (for prismatic and cylindrical specimens respectively) was applied, consisting of a real asphalt mixture (AC16) over a bituminous primer (to simulate the level of waterproofing that a pavement provides on a bridge deck). The fourth campaign was developed after a careful selection of two concrete bridges 40-50 years old, exposed and susceptible to freeze-thaw deterioration, on roads treated with de-icing salts. Concrete cores were then extracted from sound areas (deck ribs) in order to perform in the laboratory the same accelerated freeze-thaw and characterization tests as in the second campaign, based on the same standard.
From the results obtained it is concluded that the use of de-icing salts significantly accelerates deterioration, increasing both the water content in the pores and the gradient generated (physical deterioration mechanism). De-icing salts clearly accelerate the onset of damage, which increases even by a factor of 5, as this research shows for the concretes tested. In addition, a chloride gradient is produced, detected both in the concretes designed in the laboratory and in those extracted from existing bridges. In almost all cases changes appeared in the microstructure of the cement paste (chemical deterioration mechanism), confirming the formation in the C-S-H gel of the cement paste of a compound of the Ca2SiO3Cl2 type, which is possibly contributing to the alteration of the paste and to the acceleration of damage in the presence of de-icing salts. There is a period between the appearance of cracking and the loss of mass. Cracks progress rapidly from the interface of the smallest and most angular aggregates, thus facilitating concrete deterioration; it can therefore be deduced that the type of aggregate affects deterioration. In the case of the cores with asphalt coating, it appears to have been demonstrated that salt precipitation generates stresses in the concrete zones close to the coating that end up cracking the material. It is also found that the chemical deterioration mechanism probably has more impact than the physical one, since the asphalt coating is able to retain enough water for the water-content gradient in the concrete to be much smaller than without the coating. The importance of the chloride gradient in the concrete was, however, confirmed.
The detailed study also showed that the formation of new compounds can, paradoxically, reduce porosity at certain points in the cycles, as the new compound fills the pores, although this phenomenon does not halt the deterioration mechanism, and damage increases with the number of cycles. It follows that, while the asphalt coating certainly protects against F/T cycles, its protection diminishes in the presence of salts; that is, the chlorides will eventually affect the concrete of the bridge deck. Finally, between the recent concretes and the old ones extracted from real bridges, it is observed that the mechanical strengths are very similar, as are the porosity values and the water-retention capacity after pore saturation; however, there are significant differences between them in resistance to F/T cycles. The more recent concretes are, for equal properties, more resistant to F/T cycles both in water and in salts. Possibly the fact that the bridge concretes were exposed to extreme temperature conditions for long periods has sensitized them. This thesis, together with further comparisons to be made in the future, will allow us to implement a methodology based on extracting cores from real bridge decks and subjecting them to freeze-thaw tests based on the European standard UNE-CEN/TS 12390-9, albeit with specimens not standardized for it, and in turn performing other destructive characterization tests on these specimens. This will make it possible to assess the damage caused by this phenomenon and its evolution over time, and hence to act accordingly, prioritizing waterproofing and repair interventions across the bridge stock of the National Road Network. It will even be possible to draw up risk maps as a function of the zones with the most unfavorable climate and of the winter road treatments carried out.
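The mass-loss metric behind the freeze-thaw test campaigns described above amounts to simple bookkeeping: material scaled off the specimen is collected and weighed after batches of cycles, and the cumulative loss is reported per unit of exposed surface. A minimal sketch of that bookkeeping follows; the function name and the kg/m² reporting convention are illustrative assumptions, not details taken from the thesis or reproduced from the standard.

```python
def cumulative_scaling(mass_losses_g, exposed_area_mm2):
    """Cumulative surface scaling (kg/m2) after each measurement.

    mass_losses_g: scaled-off material weighed after each batch of
    freeze-thaw cycles, in grams. exposed_area_mm2: the specimen
    surface exposed to the freezing medium, in mm2.
    """
    area_m2 = exposed_area_mm2 / 1e6   # mm2 -> m2
    total_kg = 0.0
    per_area = []
    for m in mass_losses_g:
        total_kg += m / 1000.0         # g -> kg
        per_area.append(total_kg / area_m2)
    return per_area
```

Comparing such curves for specimens cycled in water versus 3% NaCl solution is one way the acceleration factor reported above (up to 5x) could be quantified.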
Abstract:
Determining with good accuracy the position of a mobile terminal when it is immersed in an indoor environment (shopping centers, office buildings, airports, stations, tunnels, etc.) is the cornerstone on which a large number of applications and services rest. Many of those services are already available in outdoor environments, although indoor environments lend themselves to other services specific to them. That number, however, could be significantly higher than it currently is if a costly infrastructure were not needed to carry out positioning with the accuracy appropriate to each of the hypothetical services, or, equally, if that infrastructure could serve other uses besides positioning. Usability of the same infrastructure for other purposes would mean it was already present in many locations, having been deployed for those other uses, or would facilitate its deployment, because the cost of that operation would offer a greater return on usability to whoever carries it out. Wireless radio-frequency communication technologies already in use for voice and data (mobile networks, WLAN, etc.) meet this requirement and would therefore facilitate the growth of positioning-based applications and services if they could be used for that purpose. However, determining position with an adequate level of accuracy using these technologies is a major challenge today. This work aims to contribute significant advances in this field. It begins with a study of the main positioning algorithms and auxiliary techniques applicable in indoor environments.
The review focuses on those suitable both for latest-generation mobile technologies and for WLAN environments, highlighting the advantages and drawbacks of each algorithm, with the final motivation of applicability both to the world of 3G and 4G mobile networks (especially LTE femtocells and small cells) and to the WLAN environment, always bearing in mind that the ultimate goal is their use indoors. The main conclusion of the review is that the triangulation techniques commonly used for outdoor localization prove useless indoors, owing to adverse effects characteristic of such environments, such as loss of line of sight and multipath propagation. Radio fingerprinting methods, which compare the signal-strength values a mobile terminal receives at positioning time against the values recorded in a radio power map built during an initial calibration phase, emerge as the best candidates for indoor scenarios. However, these systems are also affected by other problems, for example the substantial work required to set them up and the variability of the channel. In response, this work presents two original contributions to improve fingerprinting-based systems.
The first contribution describes a method for determining, in a simple way, the basic characteristics of the system: the number of samples needed to create the reference fingerprint radio map, together with the minimum number of radio-frequency emitters to deploy, all derived from initial requirements on the error and accuracy sought in the positioning, combined with data on the dimensions and physical reality of the environment. This establishes initial guidelines for dimensioning the system and counters the negative effects on the cost or performance of the system as a whole that arise from an inefficient deployment of the radio-frequency emitters and of the fingerprint capture points. The second contribution increases the accuracy the system delivers in real time, thanks to a technique for automatic recalibration of the radio power map. This technique takes into account the measurements continuously reported by a few static reference points, strategically distributed in the environment, to recompute and update the powers recorded in the radio map. An additional operational benefit of this technique is that it extends the period over which the system remains reliably usable, lowering the frequency with which the complete radio power map must be recaptured. The improvements described are directly applicable to indoor positioning mechanisms based on the wireless voice and data communications infrastructure.
A partir de ahí, esa mejora será extensible y de aplicabilidad sobre los servicios de localización (conocimiento personal del lugar donde uno mismo se encuentra), monitorización (conocimiento por terceros del citado lugar) y seguimiento (monitorización prolongada en el tiempo), ya que todos ellas toman como base un correcto posicionamiento para un adecuado desempeño. ABSTRACT To find the position where a mobile is located with good accuracy, when it is immersed in an indoor environment (shopping centers, office buildings, airports, stations, tunnels, etc.), is the cornerstone on which a large number of applications and services are supported. Many of these services are already available in outdoor environments, although the indoor environments are suitable for other services that are specific for it. That number, however, could be significantly higher than now, if an expensive infrastructure were not required to perform the positioning service with adequate precision, for each one of the hypothetical services. Or, equally, whether that infrastructure may have other different uses beyond the ones associated with positioning. The usability of the same infrastructure for purposes other than positioning could give the opportunity of having it already available in the different locations, because it was previously deployed for these other uses; or facilitate its deployment, because the cost of that operation would offer a higher return on usability for the deployer. Wireless technologies based on radio communications, already in use for voice and data communications (mobile, WLAN, etc), meet the requirement of additional usability and, therefore, could facilitate the growth of applications and services based on positioning, in the case of being able to use it. However, determining the position with the appropriate degree of accuracy using these technologies is a major challenge today. This paper provides significant advances in this field. 
This work begins with a survey of the main algorithms and auxiliary techniques related to indoor positioning, focused on those suitable for use both with latest-generation mobile technologies and in WLAN environments. The aim is to highlight the advantages and disadvantages of each of these algorithms, with their applicability to 3G and 4G mobile networks (especially LTE femtocells and small cells) and to the WLAN world as the driving motivation, and always with indoor environments as the final target. The main conclusion of the survey is that the triangulation techniques commonly used for localization outdoors are of little use indoors, owing to adverse effects of such environments, like the lack of line of sight or multipath propagation. Fingerprinting methods, based on comparing the Received Signal Strength (RSSI) values measured by the mobile device against a radio map of RSSI values recorded during a calibration phase, emerge as the best-suited methods for indoor scenarios. These systems, however, are affected by problems of their own, for example the substantial workload required to get the system ready to operate, and the variability of the radio channel. To address these problems, this work presents two original contributions that improve fingerprinting-based systems.
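The matching step of a fingerprinting method can be illustrated with a minimal k-nearest-neighbour sketch. All names and values below are illustrative assumptions, not taken from the thesis: the radio map associates each calibration point with the RSSI vector (one value per emitter) recorded there, and the position is estimated from the calibration points whose stored vectors are closest, in signal space, to the vector the mobile measures.

```python
import math

# Hypothetical radio map built during calibration:
# (x, y) position -> RSSI vector in dBm, one entry per emitter.
radio_map = {
    (0.0, 0.0): [-45, -70, -80],
    (5.0, 0.0): [-60, -55, -75],
    (0.0, 5.0): [-70, -65, -50],
    (5.0, 5.0): [-80, -60, -48],
}

def locate(measured, k=2):
    """Estimate the position as the centroid of the k calibration
    points whose stored RSSI vectors are closest (Euclidean distance
    in signal space) to the vector measured by the mobile."""
    ranked = sorted(
        radio_map.items(),
        key=lambda item: math.dist(item[1], measured),
    )[:k]
    xs = [p[0] for p, _ in ranked]
    ys = [p[1] for p, _ in ranked]
    return (sum(xs) / k, sum(ys) / k)

# A reading close to the fingerprint stored at the origin:
print(locate([-47, -68, -79], k=1))  # -> (0.0, 0.0)
```

With k > 1 the estimate interpolates between calibration points, which is the usual trade-off against sensitivity to channel variability.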
The first of these contributions describes a simple method for determining the basic characteristics of the system: the number of samples needed to create the radio map of the reference fingerprint, and the minimum number of radio-frequency emitters that need to be deployed. Both follow from the initial requirements on the positioning error and accuracy sought, combined with the dimensions and physical layout of the environment. This yields initial guidelines for dimensioning the system and minimizes the negative effects on cost and overall performance caused by an inefficient deployment of the radio-frequency emitters and of the radio map capture points. The second contribution increases the real-time accuracy of the system through a technique for automatic recalibration of the power measurements stored in the radio map. The technique uses the measurements continuously reported by a few static reference points, strategically distributed across the environment, to recompute and update the values stored in the radio map. An additional operational benefit of this technique is the extension of the system's reliable lifetime, reducing how often the full radio map must be recaptured. The above improvements are directly applicable to indoor positioning mechanisms based on wireless voice and data communications infrastructure. From there, they extend to location services (personal knowledge of where oneself is), monitoring (knowledge of that location by third parties) and tracking (monitoring prolonged over time), as all of them rely on correct positioning for proper performance.
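The recalibration idea can be sketched as follows. This is a minimal illustration under assumed names and data, not the thesis's actual algorithm: each static reference point reports a fresh RSSI reading, the per-emitter drift between fresh and calibration-time readings is averaged across reference points, and that offset is applied to every cell of the radio map.

```python
def recalibrate(radio_map, stored_refs, fresh_refs):
    """radio_map: {point: [rssi per emitter]}.
    stored_refs / fresh_refs: {ref_id: [rssi per emitter]} as recorded
    at calibration time and as reported now by the static references.
    Returns an updated radio map shifted by the mean per-emitter drift."""
    n_emitters = len(next(iter(stored_refs.values())))
    offsets = []
    for e in range(n_emitters):
        deltas = [fresh_refs[r][e] - stored_refs[r][e] for r in stored_refs]
        offsets.append(sum(deltas) / len(deltas))
    return {
        p: [rssi + offsets[e] for e, rssi in enumerate(vec)]
        for p, vec in radio_map.items()
    }

# Both references see emitter 0 about 3 dB weaker and emitter 1
# about 2 dB stronger than at calibration time:
stored = {"ref1": [-50, -60], "ref2": [-70, -40]}
fresh = {"ref1": [-53, -58], "ref2": [-73, -38]}
print(recalibrate({(0, 0): [-45, -55]}, stored, fresh))
# -> {(0, 0): [-48.0, -53.0]}
```

A global offset is the simplest choice; a real deployment could weight each reference point by its proximity to the map cell being corrected.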
Abstract:
From the eighth century until practically the twentieth, tide mills were a source of development for the areas where they were established. These devices, tied by their nature to ports, spread along the entire Atlantic coast of Europe and later in America; they processed raw materials whose origin or destination was those ports. The emergence of cheaper and more efficient energy sources brought the gradual decline of these devices, to the point that a good number of them practically disappeared. In recent years, both private and public institutions, especially municipalities, have shown interest in preserving them, whether as singular buildings housing various businesses, or as museums or "interpretation centres" that explain how the mill worked and its relationship with its surrounding region. This renewed interest, together with the need to find new renewable energy sources to meet the conditions of the Kyoto Protocol, motivates a study of applying small-scale hydropower to the old mills. This document first reviews the history of tide mills and then locates them in each province to identify possible energy production sites. It then identifies the different types of turbine applicable to these cases, from which two were chosen, one well established and the other at the experimental stage, in order to determine and analyze how certain financial figures trend as a function of the tidal range. The conclusions of this analysis are that operating on the flood tide is less productive than on the ebb, and that the tidal range on the Spanish coasts considerably limits the return on investment.
This circumstance makes a very detailed feasibility study of each individual case mandatory. Future research in this field should move towards developing a new type of mini-turbine with finer regulation to obtain a higher yield, bearing in mind also that the basin of a tide mill can make an excellent test bench. In addition, the potential energy output could be studied as an element of a distributed generation system within a smart city. From the eighth century until practically the twentieth century, tide mills were a source of development for the areas in which they were established. Across the Atlantic coast of Europe, and subsequently in America, these devices, connected by their nature with the nearby ports, were developed; in them, raw materials originating from or destined for those ports were processed. The emergence of other, cheaper and more efficient sources of energy caused the gradual decline of these devices and led to the disappearance of a large number of them. In recent years, both private and public institutions, especially municipalities, have shown interest in preserving these devices as singular buildings, or as museums or "visitor centres" where the milling process and its relationship with the region of influence are explained. This renewed interest in the subject, coupled with the need to find new sources of renewable energy in order to comply with the conditions of the Kyoto Protocol, has created the need for a study of the possible implementation of small hydropower in the old mills. In the present document we first describe the history of tide mills and then locate them across the Spanish provinces to identify possible sites for energy generation.
In the next step, we identified the different types of turbine suitable for these cases and selected two of them: the first well established, the second in an experimental phase. With this pair of turbines we determined and analyzed how certain financial figures behave depending on the tidal range. The conclusions drawn from this analysis are that operating the system on the flood tide is less productive than on the ebb tide, and that the limited tidal range on the Spanish coast considerably constrains the return on investment. This outcome forces potential investors to make a very detailed analysis of each case. Future research in this field should be directed towards developing a new type of mini-turbine with finer regulation for higher performance, taking into account also that the basin of a tide mill can be an excellent test site. Furthermore, the potential energy output can be a factor to consider in a distributed generation system within a smart city.
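The sensitivity of returns to the tidal range follows from basic physics: the potential energy stored in a mill basin per tide scales with the square of the range. The sketch below uses assumed figures (basin area, efficiency), not data from the thesis, to show why the modest ranges on much of the Spanish coast limit profitability.

```python
# Energy of a prism of water of height H (the tidal range) impounded
# over a basin of area A, with its centre of mass at H/2:
#   E = rho * g * A * H^2 / 2   (joules)
RHO = 1025.0   # seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def energy_per_tide_kwh(basin_area_m2, tidal_range_m, efficiency=0.7):
    """Recoverable energy per ebb cycle, converted from J to kWh.
    The 0.7 turbine/generator efficiency is an assumed round figure."""
    e_joules = efficiency * RHO * G * basin_area_m2 * tidal_range_m**2 / 2
    return e_joules / 3.6e6  # 1 kWh = 3.6e6 J

# Same hypothetical 10,000 m^2 basin: doubling the tidal range
# quadruples the energy captured per tide.
for h in (1.0, 2.0, 4.0):
    print(f"H = {h} m -> {energy_per_tide_kwh(10_000, h):.1f} kWh per tide")
```

Since revenue per tide grows with H squared while civil-works costs do not shrink accordingly, a site-by-site feasibility study, as the text concludes, is unavoidable at low tidal ranges.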