898 results for Exponential Random Graph Model


Relevance: 100.00%

Abstract:

Laser micromachining is an important material processing technique used in industry and medicine to produce parts with high precision. Control of the material removal process is imperative to obtain the desired part with minimal thermal damage to the surrounding material. Longer pulsed lasers, with pulse durations of milliseconds and microseconds, are used primarily for laser through-cutting and welding. In this work, a two-pulse sequence using microsecond pulse durations is demonstrated to achieve consistent material removal during percussion drilling when the delay between the pulses is properly defined. The light-matter interaction moves from a regime of surface morphology changes to melt and vapour ejection. Inline coherent imaging (ICI), a broadband, spatially coherent imaging technique, is used to monitor the ablation process. The pulse parameter space is explored and the key regimes are determined. Material removal is observed when the pulse delay is on the order of the pulse duration. ICI is also used to directly observe the ablation process. Melt dynamics are characterized by monitoring surface changes during and after laser processing at several positions in and around the interaction region. Ablation is enhanced when the melt has time to flow back into the hole before the interaction with the second pulse begins. A phenomenological model is developed to understand the relationship between material removal and pulse delay. Based on melt refilling the interaction region, described by logistic growth, and heat loss, described by exponential decay, the model is fit to several datasets. The fit parameters reflect the pulse energies and durations used in the ablation experiments. For pulse durations of 50 µs with pulse energies of 7.32 ± 0.09 mJ, the logistic growth component of the model reaches half maximum after 8.3 ± 1.1 µs and the exponential component decays with a time constant of 64 ± 15 µs.
The phenomenological model offers an interpretation of the material removal process.
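The model described above can be sketched as the product of a logistic refill term and an exponential heat-loss term. The exact functional form, amplitude and logistic steepness below are assumptions for illustration; only the half-maximum time (8.3 µs) and the decay constant (64 µs) come from the abstract:

```python
import numpy as np

def removal_vs_delay(t, a, t_half, k, tau):
    """Hypothetical form of the phenomenological model: logistic melt
    refill (half maximum at t_half) damped by exponential heat loss
    with time constant tau."""
    refill = 1.0 / (1.0 + np.exp(-(t - t_half) / k))
    cooling = np.exp(-t / tau)
    return a * refill * cooling

# Evaluate at a few pulse delays (in microseconds); a and k are illustrative.
delays = np.array([1.0, 8.3, 30.0, 200.0])
removal = removal_vs_delay(delays, a=1.0, t_half=8.3, k=2.0, tau=64.0)
```

The product form captures the trade-off the abstract describes: too short a delay and the melt has not refilled the hole, too long and the heat has dissipated.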

Relevance: 100.00%

Abstract:

Aberrant behavior of biological signaling pathways has been implicated in diseases such as cancers. Therapies have been developed to target proteins in these networks in the hope of curing the illness or bringing about remission. However, identifying targets for drug inhibition that exhibit good therapeutic index has proven to be challenging since signaling pathways have a large number of components and many interconnections such as feedback, crosstalk, and divergence. Unfortunately, some characteristics of these pathways such as redundancy, feedback, and drug resistance reduce the efficacy of single drug target therapy and necessitate the employment of more than one drug to target multiple nodes in the system. However, choosing multiple targets with high therapeutic index poses more challenges since the combinatorial search space could be huge. To cope with the complexity of these systems, computational tools such as ordinary differential equations have been used to successfully model some of these pathways. Regrettably, for building these models, experimentally-measured initial concentrations of the components and rates of reactions are needed which are difficult to obtain, and in very large networks, they may not be available at the moment. Fortunately, there exist other modeling tools, though not as powerful as ordinary differential equations, which do not need the rates and initial conditions to model signaling pathways. Petri net and graph theory are among these tools. In this thesis, we introduce a methodology based on Petri net siphon analysis and graph network centrality measures for identifying prospective targets for single and multiple drug therapies. In this methodology, first, potential targets are identified in the Petri net model of a signaling pathway using siphon analysis. Then, the graph-theoretic centrality measures are employed to prioritize the candidate targets. 
Also, an algorithm is developed to check whether the candidate targets are able to disable the intended outputs in the graph model of the system. We implement structural and dynamical models of the ErbB1-Ras-MAPK pathway and use them to assess and evaluate this methodology. The identified drug targets, single and multiple, correspond to clinically relevant drugs. Overall, the results suggest that this methodology, using siphons and centrality measures, shows promise in identifying and ranking drug targets. Since this methodology only uses the structural information of the signaling pathways and does not need initial conditions or dynamical rates, it can be applied to larger networks.
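The thesis itself uses Petri net siphon analysis followed by centrality-based ranking; as a minimal illustration of the second step only, the sketch below ranks nodes of a toy, hypothetical signalling graph (not the actual ErbB1-Ras-MAPK model) by closeness centrality computed with plain BFS:

```python
from collections import deque

def closeness(graph, node):
    """Closeness centrality: (reachable nodes) / (sum of BFS shortest-path
    distances from `node`) on an unweighted, undirected graph."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

# Toy wiring: the hub node "RAS" should rank first as a candidate target.
graph = {
    "EGFR": {"RAS"}, "RAS": {"EGFR", "RAF", "PI3K"},
    "RAF": {"RAS", "MEK"}, "MEK": {"RAF"}, "PI3K": {"RAS"},
}
ranked = sorted(graph, key=lambda n: closeness(graph, n), reverse=True)
```

In the thesis's pipeline this ranking would be applied to the candidate targets produced by siphon analysis, not to all nodes.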

Relevance: 100.00%

Abstract:

SYSTEMATIC REVIEW AND META-ANALYSIS: EFFECTS OF WALKING EXERCISE IN CHRONIC MUSCULOSKELETAL PAIN

O'Connor S.R.1, Tully M.A.2, Ryan B.3, Baxter D.G.3, Bradley J.M.1, McDonough S.M.1
1University of Ulster, Health & Rehabilitation Sciences Research Institute, Newtownabbey, United Kingdom; 2Queen's University, UKCRC Centre of Excellence for Public Health (NI), Belfast, United Kingdom; 3University of Otago, Centre for Physiotherapy Research, Dunedin, New Zealand

Purpose: To examine the effects of walking exercise on pain and self-reported function in adults with chronic musculoskeletal pain.

Relevance: Chronic musculoskeletal pain is a major cause of morbidity, exerting a substantial influence on long-term health status and overall quality of life. Current treatment recommendations advocate various aerobic exercise interventions for such conditions. Walking may represent an ideal form of exercise due to its relatively low impact. However, there is currently limited evidence for its effectiveness.

Participants: Not applicable.

Methods: A comprehensive search strategy was undertaken by two independent reviewers according to the preferred reporting items for systematic reviews and meta-analyses (PRISMA) and the recommendations of the Cochrane Musculoskeletal Review Group. Six electronic databases (Medline, CINAHL, PsycINFO, PEDro, SPORTDiscus and the Cochrane Central Register of Controlled Trials) were searched for relevant papers published up to January 2010 using MeSH terms. All randomised or non-randomised studies published in full were considered for inclusion. Studies were required to include adults aged 18 years or over with a diagnosis of chronic low back pain, osteoarthritis or fibromyalgia. Studies were excluded if they involved peri-operative or post-operative interventions or did not include a comparative, non-exercise or non-walking exercise control group. The U.S. Preventive Services Task Force system was used to assess methodological quality. Data for pain and self-reported function were extracted and converted to a score out of 100.

Analysis: Data were pooled and analyzed using RevMan (v.5.0.24). Statistical heterogeneity was assessed using the χ² and I² test statistics. A random-effects model was used to calculate the mean differences and 95% CIs. Data were analyzed by length of final follow-up, which was categorized as short (≤8 weeks post randomisation), mid (>8 weeks to 12 months) or long-term (>12 months).

Results: A total of 4324 articles were identified, and twenty studies (1852 participants) meeting the inclusion criteria were included in the review. Overall, studies were judged to be of at least fair methodological quality. The most common sources of likely bias were identified as lack of concealed allocation and failure to adequately address incomplete data. Data from 12 studies were suitable for meta-analysis. Walking led to reductions in pain at short-term (−8.44 [−14.54, −2.33]) and mid-term follow-up (−9.28 [−16.34, −2.22]). No effect was observed for long-term data (−2.49 [−7.62, 2.65]). For function, between-group differences were observed for short-term (−11.57 [−16.06, −7.08]) and mid-term data (−13.26 [−16.91, −9.62]). A smaller effect was also observed at long-term follow-up (−5.60 [−7.70, −3.50]).

Conclusions: Walking interventions were associated with statistically significant improvements in pain and function at short and mid-term follow-up. Long-term data were limited but indicated that these effects do not appear to be maintained beyond twelve months.

Implications: Walking may be an effective form of exercise for individuals with chronic musculoskeletal pain. However, further research is required which examines longer-term follow-up and dose-response issues in this population.

Key words: 1. Walking exercise 2. Musculoskeletal pain 3. Systematic review

Funding acknowledgements: Department of Employment and Learning, Northern Ireland.

Ethics approval: Not applicable.
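The random-effects pooling of mean differences used in this review can be sketched with the DerSimonian-Laird estimator: estimate the between-study variance τ² from Cochran's Q, then combine studies with weights 1/(vᵢ + τ²). The study effects and variances below are illustrative, not the review's actual data:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate and 95% CI via DerSimonian-Laird."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Illustrative pain mean differences (0-100 scale) and their variances:
md, (lo, hi) = dersimonian_laird([-8.0, -12.0, -5.0], [4.0, 9.0, 6.0])
```

When τ² > 0 the random-effects weights are flatter than the fixed-effect weights, so heterogeneous studies pull the pooled estimate less than their precision alone would suggest.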

Relevance: 100.00%

Abstract:

Background

It is unknown whether a conservative approach to fluid administration or deresuscitation (active removal of fluid using diuretics or renal replacement therapy) is beneficial following haemodynamic stabilisation of critically ill patients.

Purpose

To evaluate the efficacy and safety of conservative or deresuscitative fluid strategies in adults and children with acute respiratory distress syndrome (ARDS), sepsis or systemic inflammatory response syndrome (SIRS) in the post-resuscitation phase of critical illness.

Methods

We searched Medline, EMBASE and the Cochrane central register of controlled trials from 1980 to June 2016, and manually reviewed relevant conference proceedings from 2009 to the present. Two reviewers independently assessed search results for inclusion and undertook data extraction and quality appraisal. We included randomised trials comparing fluid regimens with differing fluid balances between groups, and observational studies investigating the relationship between fluid balance and clinical outcomes.

Results

Forty-nine studies met the inclusion criteria. Marked clinical heterogeneity was evident. In a meta-analysis of 11 randomised trials (2051 patients) using a random-effects model, we found no significant difference in mortality with conservative or deresuscitative strategies compared with a liberal strategy or usual care [pooled risk ratio (RR) 0.92, 95% confidence interval (CI) 0.82–1.02, I² = 0%]. A conservative or deresuscitative strategy resulted in increased ventilator-free days (mean difference 1.82 days, 95% CI 0.53–3.10, I² = 9%) and reduced length of ICU stay (mean difference −1.88 days, 95% CI −3.64 to −0.12, I² = 75%) compared with a liberal strategy or standard care.

Conclusions

In adults and children with ARDS, sepsis or SIRS, a conservative or deresuscitative fluid strategy results in an increased number of ventilator-free days and a decreased length of ICU stay compared with a liberal strategy or standard care. The effect on mortality remains uncertain. Large randomised trials are needed to determine optimal fluid strategies in critical illness.
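Risk ratios such as the pooled RR 0.92 above are combined on the log scale, where the sampling distribution is approximately normal. A minimal inverse-variance sketch with made-up trial counts (fixed-effect weighting for brevity; the review itself used a random-effects model):

```python
import math

def log_rr(events_t, n_t, events_c, n_c):
    """Log risk ratio and its approximate variance from a 2x2 table."""
    rr = (events_t / n_t) / (events_c / n_c)
    var = 1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c
    return math.log(rr), var

# Illustrative trial counts (not the review's data):
# (deaths, n) in the conservative arm vs (deaths, n) in the liberal arm.
trials = [(30, 200, 34, 200), (12, 100, 13, 100), (45, 300, 50, 310)]
logs = [log_rr(*t) for t in trials]
w = [1 / v for _, v in logs]
pooled = sum(wi * l for wi, (l, _) in zip(w, logs)) / sum(w)
se = math.sqrt(1 / sum(w))
# Back-transform the pooled log RR and its 95% CI to the ratio scale:
rr, lo, hi = (math.exp(x) for x in (pooled, pooled - 1.96 * se, pooled + 1.96 * se))
```

A CI that crosses 1.0 on the ratio scale corresponds to "no significant difference", exactly as reported for mortality above.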

Relevance: 100.00%

Abstract:

As part of its single technology appraisal (STA) process, the National Institute for Health and Care Excellence (NICE) invited the company that manufactures cabazitaxel (Jevtana®, Sanofi, UK) to submit evidence for the clinical and cost effectiveness of cabazitaxel for treatment of patients with metastatic hormone-relapsed prostate cancer (mHRPC) previously treated with a docetaxel-containing regimen. The School of Health and Related Research Technology Appraisal Group at the University of Sheffield was commissioned to act as the independent Evidence Review Group (ERG). The ERG produced a critical review of the evidence for the clinical and cost effectiveness of the technology based upon the company's submission to NICE. Clinical evidence for cabazitaxel was derived from a multinational randomised open-label phase III trial (TROPIC) of cabazitaxel plus prednisone or prednisolone compared with mitoxantrone plus prednisone or prednisolone, which was assumed to represent best supportive care. The NICE final scope identified a further three comparators: abiraterone in combination with prednisone or prednisolone; enzalutamide; and radium-223 dichloride for the subgroup of people with bone metastasis only (no visceral metastasis). The company did not consider radium-223 dichloride to be a relevant comparator. Neither abiraterone nor enzalutamide has been directly compared in a trial with cabazitaxel. Instead, clinical evidence was synthesised within a network meta-analysis (NMA). Results from TROPIC showed that cabazitaxel was associated with a statistically significant improvement in both overall survival and progression-free survival compared with mitoxantrone. Results from a random-effects NMA, as conducted by the company and updated by the ERG, indicated that there was no statistically significant difference between the three active treatments for both overall survival and progression-free survival.
Utility data were not collected as part of the TROPIC trial, and were instead taken from the company's UK early access programme. Evidence on resource use came from the TROPIC trial, supplemented by both expert clinical opinion and a UK clinical audit. List prices were used for mitoxantrone, abiraterone and enzalutamide as directed by NICE, although commercial in-confidence patient-access schemes (PASs) are in place for abiraterone and enzalutamide. The confidential PAS was used for cabazitaxel. Sequential use of the advanced hormonal therapies (abiraterone and enzalutamide) does not usually occur in clinical practice in the UK. Hence, cabazitaxel could be used within two pathways of care: either when an advanced hormonal therapy was used pre-docetaxel, or when one was used post-docetaxel. The company believed that the former pathway was more likely to represent standard National Health Service (NHS) practice, and so their main comparison was between cabazitaxel and mitoxantrone, with effectiveness data from the TROPIC trial. Results of the company's updated cost-effectiveness analysis estimated a probabilistic incremental cost-effectiveness ratio (ICER) of £45,982 per quality-adjusted life-year (QALY) gained, which the committee considered to be the most plausible value for this comparison. Cabazitaxel was estimated to be both cheaper and more effective than abiraterone. Cabazitaxel was estimated to be cheaper but less effective than enzalutamide, resulting in an ICER of £212,038 per QALY gained for enzalutamide compared with cabazitaxel. The ERG noted that radium-223 is a valid comparator (for the indicated sub-group), and that it may be used in either of the two care pathways. Hence, its exclusion leads to uncertainty in the cost-effectiveness results. 
In addition, the company assumed that there would be no drug wastage when cabazitaxel was used, with cost-effectiveness results being sensitive to this assumption: modelling drug wastage increased the ICER comparing cabazitaxel with mitoxantrone to over £55,000 per QALY gained. The ERG updated the company's NMA and used a random-effects model to perform a fully incremental analysis between cabazitaxel, abiraterone, enzalutamide and best supportive care using PASs for abiraterone and enzalutamide. Results showed that both cabazitaxel and abiraterone were extendedly dominated by the combination of best supportive care and enzalutamide. Preliminary guidance from the committee, which included wastage of cabazitaxel, did not recommend its use. In response, the company provided both a further discount to the confidential PAS for cabazitaxel and confirmation from NHS England that it is appropriate to supply and purchase cabazitaxel in pre-prepared intravenous-infusion bags, which would remove the cost of drug wastage. As a result, the committee recommended use of cabazitaxel as a treatment option in people with an Eastern Cooperative Oncology Group performance status of 0 or 1 whose disease had progressed during or after treatment with at least 225 mg/m² of docetaxel, as long as it was provided at the discount agreed in the PAS and purchased in either pre-prepared intravenous-infusion bags or in vials at a reduced price to reflect the average per-patient drug wastage.
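The ICER figures quoted above are simply incremental cost divided by incremental QALYs. A minimal sketch with hypothetical round numbers (not the appraisal's confidential PAS figures):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY.
    (Assumes the new option is both costlier and more effective;
    dominance and extended dominance need separate handling.)"""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical: new therapy costs £46,000 more and yields 1.0 more QALY.
value = icer(60_000, 1.3, 14_000, 0.3)   # cost per QALY gained
```

In a fully incremental analysis like the ERG's, options are first sorted by cost, dominated options removed, and ICERs computed between adjacent remaining options; an option is "extendedly dominated" when a mix of two others yields more QALYs at the same cost.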

Relevance: 100.00%

Abstract:

Introduction: Type 2 diabetes is a debilitating, progressive and often fatal disease that affects a growing number of people worldwide. Non-insulin antidiabetic treatment (NIADT), in particular oral antidiabetic treatment (OADT), is the most frequently used in adults with this disease. However, many of these people do not take their OADT as prescribed, raising the problem of suboptimal adherence. This has harmful consequences both for patients and for the society in which they live. It is therefore relevant to identify possible solutions to this problem. Objectives: Three research objectives were studied: 1) To explore the ability of the theory of planned behaviour (TPB) to predict future adherence to NIADT in adults with type 2 diabetes; 2) To evaluate the overall efficacy of interventions aimed at improving adherence to OADT in adults with type 2 diabetes, and to study the influence of behaviour-change techniques on this overall efficacy; and 3) To evaluate the overall efficacy of motivational interviewing on medication adherence in adults with chronic disease, and to study the influence of the characteristics of this intervention on its overall efficacy. Methods: For objective 1, a web survey was conducted, followed by an assessment of adherence to NIADT over a 30-day period, in adults with type 2 diabetes who were members of Diabète Québec. The survey consisted of completing a self-administered questionnaire including the TPB variables (intention, perceived behavioural control and attitude) as well as other so-called "external" variables. The information used to compute adherence came from the participants' pharmacy records, transmitted via the ReMed platform. A multivariate linear regression was used to estimate the association between intention and future adherence to NIADT, as well as the interaction between past adherence and intention. For objectives 2 and 3, two systematic reviews and meta-analyses were carried out and reported according to the PRISMA guidelines. A random-effects model was used to estimate the overall efficacy (Hedges' g) of the interventions and its 95% confidence interval (95% CI) in each review. We also quantified heterogeneity (Higgins' I²) between studies and performed subgroup and sensitivity analyses. Results: Objective 1: There was a statistically significant interaction between past adherence and intention (p = 0.03). Intention was not statistically associated with future adherence to NIADT, but its effect was stronger in non-adherers than in adherers before the web survey. In contrast, intention was mainly predicted by perceived behavioural control, both in past adherers [β = 0.90, 95% CI (0.80; 1.00)] and in past non-adherers [β = 0.76, 95% CI (0.56; 0.97)]. Objective 2: The overall efficacy of interventions on adherence to OADT was 0.21 [95% CI (−0.05; 0.47); I² = 82%]. The overall efficacy of interventions in which providers helped patients and/or clinicians to be proactive in managing adverse effects was 0.64 [95% CI (0.31; 0.96); I² = 56%]. Objective 3: The overall efficacy of interventions (based on motivational interviewing) on medication adherence was 0.12 [95% CI (0.05; 0.20); I² = 1%]. Interventions based solely on motivational interviewing [β = 0.18, 95% CI (0.00; 0.36)] and those in which providers were coached [β = 0.47, 95% CI (0.03; 0.90)] were the most effective. Also, interventions delivered face-to-face were more effective than those delivered by telephone [β = 0.27, 95% CI (0.04; 0.50)]. Conclusion: There is a gap between intention and future adherence to NIADT, which is partially explained by the level of past adherence. However, there was not enough statistical power to demonstrate a statistically significant association between intention and future adherence among past non-adherers. On the other hand, some solutions to the problem of suboptimal adherence to OADT were identified. Indeed, helping patients and/or clinicians to be proactive in managing adverse effects effectively improves adherence to OADT in adults with type 2 diabetes. Also, interventions based on motivational interviewing effectively improve medication adherence in adults with chronic disease. Motivational interviewing could therefore be used as a clinical tool to support patients in the self-management of their OADT.
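The overall efficacies reported for objectives 2 and 3 are Hedges' g values, i.e. standardized mean differences with a small-sample correction. For a single two-arm study, g can be computed as below; the adherence scores are purely illustrative:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d on the pooled SD, times the small-sample
    correction factor J = 1 - 3 / (4N - 9)."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * d

# Illustrative adherence scores: intervention arm vs control arm.
g = hedges_g(78.0, 10.0, 40, 74.0, 10.0, 42)
```

The per-study g values and their variances are then pooled with a random-effects model, exactly as the reviews describe.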

Relevance: 100.00%

Abstract:

This work aims to investigate the relationship between entrepreneurship and the incidence of bureaucratic corruption in the Brazilian states and the Federal District. The main hypothesis of this study is that the opening of a business in Brazilian states is negatively affected by the incidence of corruption. The theoretical framework is divided into entrepreneurship and bureaucratic corruption, with an emphasis on the materialist (objectivist) perspective of entrepreneurship and the effects of bureaucratic corruption on entrepreneurial activity. Using panel-data regression, we estimated pooled, fixed-effects and random-effects models. To measure corruption, I used the General Index of Corruption for the Brazilian states (BOLL, 2010), and to represent entrepreneurship, firm entry per capita by state. Tests (Chow, Hausman and Breusch-Pagan) indicate that the random-effects model is more appropriate, and the preliminary results indicate a positive impact of bureaucratic corruption on entrepreneurial activity, contradicting the expected hypothesis and the findings of previous articles on Brazil, and corroborating the proposition of Dreher and Gassebner (2011) that, in countries with high regulation, bureaucratic corruption can act as grease in the wheels of entrepreneurship.
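The Chow/Hausman/Breusch-Pagan comparison above is about choosing between pooled, fixed-effects and random-effects estimators. The sketch below, on simulated data rather than the paper's corruption dataset, shows the within (fixed-effects) transformation and why pooled OLS is biased when unit effects correlate with the regressor:

```python
import numpy as np

rng = np.random.default_rng(42)
n_states, n_years, beta = 60, 12, -0.5
alpha = rng.normal(size=n_states)                          # state effects
x = rng.normal(size=(n_states, n_years)) + alpha[:, None]  # regressor correlated with alpha
y = beta * x + alpha[:, None] + rng.normal(0.0, 0.1, (n_states, n_years))

# Pooled OLS ignores alpha; omitted-variable bias pulls the slope toward 0 here.
xf, yf = x.ravel(), y.ravel()
b_pooled = np.sum((xf - xf.mean()) * (yf - yf.mean())) / np.sum((xf - xf.mean()) ** 2)

# Within (fixed-effects) estimator: demeaning by state removes alpha entirely.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
b_fe = np.sum(xd * yd) / np.sum(xd ** 2)
```

A Hausman test formalizes this comparison: when the fixed- and random-effects estimates diverge, the random-effects assumption (effects uncorrelated with regressors) is rejected.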


Relevance: 100.00%

Abstract:

In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work to date involving influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem that has attracted a significant amount of interest from the computer science literature. Given a social network, find a target set of customers to seed with a product. Then, a cascade will be caused by these initial adopters, and other people start to adopt this product due to the influence they receive from earlier adopters. The idea is to find the minimum cost that results in the entire network adopting the product. We first study a problem called the Weighted Target Set Selection (WTSS) Problem. In the WTSS problem, the diffusion can take place over as many time periods as needed and a free product is given out to the individuals in the target set. Restricting the number of time periods that the diffusion takes place over to be one, we obtain a problem called the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider a problem called the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that we restrict the number of time periods that the diffusion takes place over to be one. We apply a common research paradigm to each of these four problems. First, we work on special graphs: trees and cycles. Based on the insights we obtain from special graphs, we develop efficient methods for general graphs. On trees, first, we propose a polynomial time algorithm. More importantly, we present a tight and compact extended formulation. We also project the extended formulation onto the space of the natural variables, giving the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial time algorithm on cycles. This leads to our contribution on general graphs. For the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we are able to obtain high-quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
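The target-set diffusion described above can be illustrated with a threshold cascade and a brute-force search for the smallest seed set. This is only a toy sketch on a hypothetical graph with exponential search; the dissertation solves these problems exactly with integer programming and branch-and-cut:

```python
from itertools import combinations

def cascade(graph, thresholds, seeds):
    """Repeatedly activate any node whose number of active neighbours
    meets its threshold; return the final active set."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in graph:
            if v not in active and sum(u in active for u in graph[v]) >= thresholds[v]:
                active.add(v)
                changed = True
    return active

def min_target_set(graph, thresholds):
    """Smallest seed set that activates every node (brute force,
    fine only for toy instances)."""
    nodes = list(graph)
    for k in range(len(nodes) + 1):
        for seeds in combinations(nodes, k):
            if cascade(graph, thresholds, seeds) == set(nodes):
                return set(seeds)

# Toy 4-cycle where every node needs just 1 active neighbour:
graph = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
thresholds = {1: 1, 2: 1, 3: 1, 4: 1}
seeds = min_target_set(graph, thresholds)
```

On this cycle a single seed suffices, since activation then spreads around the ring; weighted costs and partial incentives (WTSS, LCIP) generalize the objective beyond seed-set size.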

Relevance: 100.00%

Abstract:

Master's dissertation—Universidade de Brasília, Departamento de Administração, Programa de Pós-graduação em Administração, 2016.

Relevance: 100.00%

Abstract:

Background: Preterm labor, defined as live birth before 37 weeks of gestation, is a main determinant of neonatal morbidity and mortality around the world. Objective: The aim of this study was to determine the prevalence of preterm labor in Iran through a meta-analysis, to serve as a summary measure for policy makers in this field. Materials and Methods: In this meta-analysis, the Thomson (Web of Knowledge), PubMed/Medline, Science Direct, Scopus, Google Scholar, Iranmedex, Scientific Information Database (SID), Magiran, and Medlib databases were searched for articles in English and Persian published between 1995 and 2014. Based on the inclusion and exclusion criteria, 14 studies (out of 1370 publications) were selected. Data were analyzed using Stata software version 11. The heterogeneity of the reported prevalence among studies was evaluated by the Chi-square-based Q test and the I² statistic. Results: The Q test and I² statistic revealed severe heterogeneity (Q = 2505.12, p-value < 0.001 and I² = 99.5%); consequently, the random-effects model was used for the meta-analysis. Based on this model, the overall estimated prevalence of preterm labor in Iran was 9.2% (95% CI: 7.6–10.7). Conclusion: The present study summarized the results of previous studies and provides a comprehensive view of preterm delivery in Iran. In order to reach a more desirable level and reduce the rate in the coming years, identifying the factors involved and taking interventional and preventive actions seem necessary.
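The Q and I² statistics quoted above can be computed from per-study effects and variances as follows; the prevalence values below are illustrative, not the 14 included studies:

```python
def heterogeneity(effects, variances):
    """Cochran's Q and Higgins' I^2 under fixed-effect (inverse-variance)
    weighting; I^2 is the share of variability beyond sampling error."""
    w = [1.0 / v for v in variances]
    mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - mean) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Illustrative prevalence estimates (proportions) with their variances:
q, i2 = heterogeneity([0.06, 0.09, 0.13, 0.05], [1e-4, 2e-4, 1.5e-4, 1e-4])
```

A Q far above its degrees of freedom (here 2505.12 vs 13) and an I² near 100% are exactly the pattern that motivated the random-effects model in this study.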

Relevance: 100.00%

Abstract:

The papers included in this thesis deal with a few aspects of insurance economics that have seldom been dealt with in the applied literature. In the first paper I apply, for the first time, the tools of the economics of crime to study the determinants of fraud, using data on Italian provinces. The contributions to the literature are manifold:
- The price of insuring has a positive correlation with the propensity to defraud.
- Social norms constrain fraudulent behavior, but their strength is curtailed in economic downturns.
- I apply a simple extension of the Random Coefficient model, which allows for the presence of time-invariant covariates and asymmetries in the impact of the regressors.
The second paper assesses how the evolution of macroprudential regulation of insurance companies has been reflected in their equity prices. I employ a standard event study methodology, deriving the definition of the "control" and "treatment" groups from what is implied by the regulatory framework. The main results are:
- Markets care about the evolution of the legislation. Their perception has shifted from an initially positive assessment of a possible implicit "too big to fail" subsidy to a more negative one related to its cost in terms of stricter capital requirements.
- The size of this phenomenon is positively related to the leverage, size and geographical location of the insurance companies.
The third paper introduces a novel methodology to forecast non-life insurance premiums and profitability as a function of macroeconomic variables, using the simultaneous-equation framework traditionally employed in macroeconometric models and a simple theoretical model of insurance pricing to derive a long-term relationship between premiums, claims expenses and short-term rates. The model is shown to provide a better forecast of premiums and profitability compared with the single-equation specifications commonly used in applied analysis.
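The standard event study methodology mentioned above fits a market model over an estimation window and then cumulates abnormal returns over the event window. A sketch on synthetic returns (all numbers illustrative, not the thesis's data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Estimation window: fit the market model r_stock = a + b * r_mkt.
r_mkt = rng.normal(0.0, 0.01, 120)
r_stock = 0.0002 + 1.2 * r_mkt + rng.normal(0.0, 0.002, 120)
b_hat, a_hat = np.polyfit(r_mkt, r_stock, 1)

# Event window: abnormal return = actual minus market-model prediction.
r_mkt_ev = np.array([0.004, -0.002, 0.001])
r_stock_ev = np.array([-0.010, -0.012, -0.008])   # say, a negative reaction
ar = r_stock_ev - (a_hat + b_hat * r_mkt_ev)
car = ar.sum()   # cumulative abnormal return over the event window
```

The treatment/control comparison in the second paper would then contrast CARs of insurers affected by a regulatory announcement against those of unaffected firms.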

Relevance: 100.00%

Abstract:

Anthropogenic activities and climatic processes heavily influence surface water resources by causing their progressive depletion, which in turn affects both societies and the environment. Therefore, there is an urgent need to understand the contribution of human and climatic dynamics on the variation of surface water availability. Here, this investigation is performed on the contiguous United States (CONUS) using remotely-sensed data. Three anthropogenic (i.e., urban area, population, and irrigation) and two climatic factors (i.e., precipitation and temperature) were selected as potential drivers of changes in surface water extent and the overlap between the increase or decrease in these drivers and the variation of surface water was examined. Most of the river basins experienced a surface water gain due to precipitation increase (eastern CONUS), and a reduction of irrigated land (western CONUS). River basins of the arid southwestern region and some river basins of the northeastern area encountered a surface water loss, essentially induced by population growth, along with a precipitation deficit and a general expansion of irrigated land. To further inspect the role of population growth and urbanization on surface water loss, the spatial interaction between human settlements and surface water depletion was examined by evaluating the frequency of surface water loss as a function of distance from urban areas. The decline of the observed frequency was successfully reproduced with an exponential distance-decay model, proving that surface water losses are more concentrated in the proximity of cities. Climatic conditions influenced this pattern, with more widely distributed losses in arid regions compared to temperate and continental areas. 
The results presented in this Thesis provide an improved understanding of the effects of anthropogenic and climatic dynamics on surface water availability, which could be integrated in the definition of sustainable strategies for urbanization, water management, and surface water restoration.
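The exponential distance-decay model described above, f(d) = f₀·exp(−d/λ), is linear in log space, so its parameters can be recovered by a log-linear least-squares fit. A sketch on synthetic, noise-free data with assumed parameter values (not the Thesis's remote-sensing data):

```python
import numpy as np

# Synthetic loss frequencies following f(d) = f0 * exp(-d / lam):
f0_true, lam_true = 0.30, 12.0       # illustrative values, distance in km
d = np.arange(1.0, 41.0)             # distance bins from the urban area
freq = f0_true * np.exp(-d / lam_true)

# In log space: log f = log f0 - d / lam, so a degree-1 polynomial fit
# recovers the decay length from the slope and f0 from the intercept.
slope, intercept = np.polyfit(d, np.log(freq), 1)
lam_hat, f0_hat = -1.0 / slope, np.exp(intercept)
```

With real frequency data the fit would be noisy and λ would quantify how tightly surface-water losses cluster around cities, with smaller λ in temperate regions than in arid ones per the results above.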

Relevance: 100.00%

Abstract:

Earthquake prediction is a complex task for scientists due to the rare occurrence of high-intensity earthquakes and their inaccessible depths. Despite this challenge, it is a priority to protect infrastructure and populations living in areas of high seismic risk. Reliable forecasting requires comprehensive knowledge of seismic phenomena. In this thesis, the development, application, and comparison of both deterministic and probabilistic forecasting methods are shown. Regarding the deterministic approach, the implementation of an alarm-based method using the occurrence of strong (fore)shocks, widely felt by the population, as a precursor signal is described. This model is then applied for retrospective prediction of Italian earthquakes of magnitude M ≥ 5.0, 5.5, and 6.0 that occurred in Italy from 1960 to 2020. Retrospective performance testing is carried out using tests and statistics specific to deterministic alarm-based models. Regarding probabilistic models, this thesis focuses mainly on the EEPAS and ETAS models. Although the EEPAS model has been previously applied and tested in some regions of the world, it has never been used for forecasting Italian earthquakes. In the thesis, the EEPAS model is used to retrospectively forecast Italian shallow earthquakes with a magnitude of M ≥ 5.0 using new MATLAB software. The forecasting performance of the probabilistic models was compared to other models using CSEP binary tests. The EEPAS and ETAS models showed different characteristics for forecasting Italian earthquakes, with EEPAS performing better in the long term and ETAS performing better in the short term. The FORE model, based on strong precursor quakes, is compared to EEPAS and ETAS using an alarm-based deterministic approach. All models perform better than a random forecasting model, with the ETAS and FORE models showing the best performance. However, to fully evaluate forecasting performance, prospective tests should be conducted.
The lack of objective tests for evaluating deterministic models and comparing them with probabilistic ones was a challenge faced during the study.
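The ETAS model referenced above defines a conditional intensity: a background rate plus an Omori-law aftershock contribution from every past event. A minimal sketch with illustrative (not calibrated) parameter values:

```python
import math

def etas_intensity(t, events, mu=0.05, k=0.1, alpha=1.0, c=0.01, p=1.1, m0=3.0):
    """ETAS conditional intensity at time t: background rate mu plus a
    magnitude-scaled Omori-law term for each earlier event (t_i, M_i)."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += k * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

# Two illustrative shocks (time in days, magnitude); the rate right after
# the M5.5 event should greatly exceed the background and then decay.
events = [(10.0, 4.0), (20.0, 5.5)]
just_after = etas_intensity(20.1, events)
much_later = etas_intensity(60.0, events)
```

This short-lived spike in intensity after a strong shock is why ETAS dominates short-term forecasting, while EEPAS, which models longer precursory scales, fares better at long range.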

Relevance: 50.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)