Abstract:
Reactive oxygen intermediates (ROI) contribute to neuronal injury in cerebral ischemia and trauma. In this study we explored the role of ROI in bacterial meningitis. Meningitis caused by group B streptococci in infant rats led to two distinct forms of neuronal injury: areas of necrosis in the cortex and neuronal loss in the dentate gyrus of the hippocampus, the latter showing evidence of apoptosis. Staining of brain sections with diaminobenzidine after perfusion with manganese buffer and measurement of lipid peroxidation products in brain homogenates both provided evidence that meningitis led to the generation of ROI. Treatment with the radical scavenger alpha-phenyl-tert-butyl nitrone (PBN) (100 mg/kg q8h i.p.) beginning at the time of infection completely abolished ROI detection and the increase in lipid peroxidation. Cerebral cortical perfusion in animals with meningitis was reduced to 37.5 ± 21.0% of uninfected controls (P < 0.05), and PBN restored cortical perfusion to 72.0 ± 8.1% of controls (P < 0.05 vs meningitis). PBN also completely prevented neuronal injury in the cortex and hippocampus when started at the time of infection (P < 0.02), and significantly reduced both forms of injury when started 18 h after infection together with antibiotics (P < 0.004 for the cortex and P < 0.001 for the hippocampus). These data indicate that the generation of ROI is a major contributor to cerebral ischemia and to necrotic and apoptotic neuronal injury in this model of neonatal meningitis.
Abstract:
This study examines how different microphysical parameterization schemes influence orographically induced precipitation and the distributions of hydrometeors and water vapour under midlatitude summer conditions in the Weather Research and Forecasting (WRF) model. A high-resolution, two-dimensional idealized simulation is used to assess the differences between the schemes as a moist air flow interacts with a bell-shaped, 2 km high mountain. Periodic lateral boundary conditions are chosen to recirculate atmospheric water in the domain. All 13 selected microphysical schemes are found to conserve the water in the model domain: the gain or loss of water is less than 0.81% over a simulation interval of 61 days. The differences between the schemes in the distributions of water vapour, hydrometeors and accumulated precipitation are presented and discussed. The Kessler scheme, the only scheme without ice-phase processes, shows final cloud liquid water values 14 times greater than the other schemes. The differences among the other schemes are less extreme, but they still differ by up to 79% in water vapour, up to a factor of 10 in hydrometeors and up to 64% in accumulated precipitation at the end of the simulation. The microphysical schemes also differ in surface evaporation rate: the WRF single-moment 3-class scheme has the highest surface evaporation rate, compensated by the highest precipitation rate. The schemes' different distributions of hydrometeors and water vapour induce differences of up to 49 W m⁻² in the downwelling shortwave radiation and up to 33 W m⁻² in the downwelling longwave radiation.
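The conservation statement above amounts to a closed-domain water budget check: with periodic lateral boundaries, the only way water can leave or enter the atmosphere is through surface precipitation and evaporation. Below is a minimal sketch of such a check, assuming gridded mixing-ratio fields and per-cell air masses are available from the model output (all names here are hypothetical placeholders, not WRF API calls):

```python
import numpy as np

def total_water(fields, air_mass):
    """Domain-integrated water mass (kg): vapour plus all hydrometeor
    mixing ratios (kg/kg), weighted by the air mass of each grid cell."""
    return sum(np.sum(q * air_mass) for q in fields)

def budget_drift_percent(fields_t0, fields_t1, air_mass, precip_kg, evap_kg):
    """Percent gain (+) or loss (-) of water over the interval, after
    accounting for the only surface sources/sinks in a periodic domain."""
    w0 = total_water(fields_t0, air_mass)
    w1 = total_water(fields_t1, air_mass)
    residual = (w1 - w0) + precip_kg - evap_kg  # ~0 if water is conserved
    return 100.0 * residual / w0
```

A conservative scheme should keep the absolute drift small; the 0.81% figure quoted above is this kind of number accumulated over 61 days.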
Abstract:
Air was sampled from the porous firn layer at the NEEM site in Northern Greenland. We use an ensemble of ten reference tracers of known atmospheric history to characterise the transport properties of the site. By analysing uncertainties in both the data and the reference-gas atmospheric histories, we can objectively assign weights to each of the gases used for the depth-diffusivity reconstruction. We define an objective root-mean-square criterion that is minimised in the model tuning procedure. Each tracer constrains the firn profile differently through its unique atmospheric history and free-air diffusivity, making our multiple-tracer characterisation method a clear improvement over the commonly used single-tracer tuning. Six firn air transport models are tuned to the NEEM site; all models successfully reproduce the data within a 1σ Gaussian distribution. A comparison between two replicate boreholes drilled 64 m apart shows differences in measured mixing ratio profiles that exceed the experimental error. We find evidence that diffusivity does not vanish completely in the lock-in zone, as is commonly assumed. The ice age − gas age difference (Δage) at the firn-ice transition is calculated to be 182 (+3/−9) yr. We further present the first intercomparison study of firn air models, in which we introduce diagnostic scenarios designed to probe specific aspects of the model physics. Our results show that there are major differences in the way the models handle advective transport. Furthermore, diffusive fractionation of isotopes in the firn is poorly constrained by the models, which has consequences for attempts to reconstruct the isotopic composition of trace gases back in time using firn air and ice core records.
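The multiple-tracer tuning can be pictured as minimising a weighted root-mean-square misfit, where each gas is down-weighted by the combined uncertainty of its measurements and of its reconstructed atmospheric history. Here is a minimal sketch of such an objective, under the assumption that a forward firn model maps a diffusivity profile and an atmospheric history to a depth profile (the `model` callable and the tracer attributes are hypothetical placeholders):

```python
import numpy as np

def tuning_cost(diffusivity_profile, tracers, model):
    """Weighted RMS mismatch over all reference tracers.

    Better-constrained gases (smaller measurement and history
    uncertainties) contribute more strongly to the criterion that the
    depth-diffusivity reconstruction minimises.
    """
    normalised_misfits = []
    for t in tracers:
        predicted = model(diffusivity_profile, t.atmospheric_history,
                          t.free_air_diffusivity)
        sigma = np.sqrt(t.data_sigma ** 2 + t.history_sigma ** 2)
        normalised_misfits.append((t.measured_profile - predicted) / sigma)
    return np.sqrt(np.mean(np.concatenate(normalised_misfits) ** 2))
```

Any standard optimiser can then be run over the diffusivity parameters to minimise this criterion.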
Abstract:
AIMS To determine the efficacy of a minimally invasive (MI) surgical approach using a human MI lumbar retractor for canine lumbosacral dorsal laminectomy and partial discectomy, and to compare this technique to the standard open surgical (OS) approach. METHODS Lumbosacral dorsal laminectomy and partial discectomy were performed on 16 large-breed canine cadavers using either a standard OS (n=8) or MI (n=8) approach. Skin and fascial incision lengths, procedure time, and intraoperative complications were recorded. Postoperatively, specimens were evaluated for laminectomy and discectomy dimensions and for visible damage to the cauda equina and exiting nerve roots. RESULTS Median skin and fascial incision lengths in the OS group were longer than in the MI group (p<0.001). Median laminectomy length was similar between the approaches (p=0.234), but width was greater for the MI than the OS approach (p=0.002). Both approaches achieved similar partial discectomy width (p=0.279). Overall surgical time was longer for the MI approach than for OS, with a median of 18.5 (min 15.5, max 21.8) minutes for MI compared to 14.6 (min 13.1, max 16.9) minutes for OS (p=0.001). CONCLUSIONS The MI approach reduced incision lengths while retaining comparable laminectomy and discectomy dimensions. In this cadaveric model the MI approach required more time to complete, but this difference may not be relevant in clinical cases. CLINICAL RELEVANCE Dogs undergoing lumbosacral dorsal laminectomy are commonly large-breed dogs. The traditional open approach requires a long skin incision and extensive soft tissue dissection, especially in overweight animals. A MI approach accomplishing the same surgical result while minimising soft tissue trauma could reduce post-operative pain and recovery time, and may lower wound-related complications. Clinical studies are needed to confirm postoperative benefit and assess operating times in vivo.
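The group comparisons above (medians with min/max and p-values) are of the kind produced by a nonparametric two-sample test; the abstract does not name the test, so the sketch below assumes a Mann-Whitney U test and uses illustrative values, not the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# hypothetical surgical times (minutes) for the two approaches
mi_time = np.array([18.5, 17.2, 19.0, 21.8, 15.5, 18.9, 20.1, 18.0])
os_time = np.array([14.6, 13.1, 15.0, 16.9, 14.2, 14.8, 13.9, 15.3])

stat, p = mannwhitneyu(mi_time, os_time, alternative="two-sided")
print(f"median MI = {np.median(mi_time):.1f} min, "
      f"median OS = {np.median(os_time):.1f} min, p = {p:.4f}")
```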
Abstract:
OBJECTIVE: New routes for cell transplantation into the brain need to be explored, as intracerebral or intrathecal applications carry a high risk of damage to the central nervous system. It has been hypothesized that transnasally administered cells bypass the blood-brain barrier and migrate along the olfactory neural route into the brain and cerebrospinal fluid. Our goal was to test this hypothesis by transnasally administering Wharton's Jelly mesenchymal stem cells (WJ-MSC) and neural progenitor cells (NPC) to perinatal rats in a model of hypoxic-ischemic brain injury. STUDY DESIGN: Four-day-old Wistar rat pups, previously brain-damaged by a combined hypoxic-ischemic and inflammatory insult, received either WJ-MSC or green fluorescent protein-expressing NPC. The heads of the rat pups were immobilized and 3 ml drops containing the cells (50,000 cells/ml) were placed on one nostril, allowing them to be snorted. This procedure was repeated twice, alternating between the right and left nostril, with an interval of one minute between administrations. The rat pups received a total of 600,000 cells. Animals were sacrificed 24 h, 48 h or 7 days after application of the cells. Fixed brains were collected, embedded in paraffin and sectioned. RESULTS: Transplanted cells were found in the layers of the olfactory bulb (OB), the cerebral cortex, the thalamus and the hippocampus. The number of cells was highest in the OB. Animals treated with transnasally delivered stem cells showed significantly decreased gliosis compared to untreated animals. CONCLUSION: Our data show that transnasal delivery of WJ-MSC and NPC to the newborn brain after perinatal brain damage is successful. The cells not only migrate into the brain, but also decrease scar formation and improve neurogenesis. The non-invasive intranasal route may therefore be the preferred method for stem cell treatment of perinatal brain damage and should be considered in future clinical trials.
Abstract:
BACKGROUND Cam-type femoroacetabular impingement (FAI), resulting from an abnormal nonspherical femoral head shape, leads to chondrolabral damage and is considered a cause of early osteoarthritis. A previously developed experimental ovine FAI model induces cam-type impingement that results in localized chondrolabral damage, replicating the patterns found in the human hip. Biochemical MRI modalities such as T2 and T2* may allow evaluation of cartilage biochemistry long before cartilage loss occurs and, for that reason, may be a worthwhile avenue of inquiry. QUESTIONS/PURPOSES We asked: (1) Does the histological grading of degenerated cartilage correlate with T2 or T2* values in this ovine FAI model? (2) How accurately can zones of degenerated cartilage be predicted with T2 or T2* MRI in this model? METHODS Cam-type FAI was induced in eight Swiss alpine sheep by performing a closing-wedge intertrochanteric varus osteotomy. After 10 to 14 weeks of ambulation, the sheep were euthanized and 3-T MRI of the hip was performed. T2 and T2* values were measured at six locations on the acetabulum and compared with the histological damage pattern using the Mankin score, an established histological scoring system for quantifying cartilage degeneration. Both T2 and T2* values are determined by cartilage water content and its collagen fiber network; of the two, T2* mapping is a more modern sequence with technical advantages (eg, shorter acquisition time). Correlation of the Mankin score with the T2 and T2* values was evaluated using Spearman's rank correlation coefficient. We used a hierarchical cluster analysis to calculate the positive and negative predictive values of T2 and T2* for predicting advanced cartilage degeneration (Mankin ≥ 3). RESULTS We found a negative correlation between the Mankin score and both the T2 (p < 0.001, r = -0.79) and T2* values (p < 0.001, r = -0.90). For the T2 technique, we found a positive predictive value of 100% (95% confidence interval [CI], 79%-100%) and a negative predictive value of 84% (95% CI, 67%-95%). For the T2* technique, we found a positive predictive value of 100% (95% CI, 79%-100%) and a negative predictive value of 94% (95% CI, 79%-99%). CONCLUSIONS T2 and T2* MRI can reliably detect early cartilage degeneration in this experimental ovine FAI model. CLINICAL RELEVANCE T2 and T2* MRI have the potential to allow noninvasive monitoring of the natural course of osteoarthritis and to evaluate the results of surgical treatments aimed at joint preservation.
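The two analyses reported above can be sketched in a few lines: a Spearman rank correlation between Mankin scores and mapping values, and predictive values for detecting advanced degeneration (Mankin ≥ 3). The study derived its decision rule from a hierarchical cluster analysis; the fixed threshold below merely stands in for it, and all values are illustrative placeholders:

```python
import numpy as np
from scipy.stats import spearmanr

mankin = np.array([0, 1, 2, 3, 4, 5, 6, 2, 3, 1])            # histology grades
t2star = np.array([24, 22, 20, 15, 13, 11, 10, 19, 14, 23])  # T2* values (ms)

rho, p = spearmanr(mankin, t2star)  # expected negative, as reported above

threshold = 16.0                    # hypothetical stand-in for the cluster cut
predicted = t2star < threshold      # predicted advanced degeneration
actual = mankin >= 3

tp = np.sum(predicted & actual)
fp = np.sum(predicted & ~actual)
tn = np.sum(~predicted & ~actual)
fn = np.sum(~predicted & actual)
ppv = tp / (tp + fp)                # cf. the 100% reported for T2*
npv = tn / (tn + fn)                # cf. the 94% reported for T2*
```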
Abstract:
Pregnant BALB/c mice have been widely used as an in vivo model to study Neospora caninum infection biology and to provide proof of concept for assessments of drugs and vaccines against neosporosis. The fact that this model has been used with isolates of variable virulence, varying infection routes and differing methods of preparing the parasites for infection has made it impossible to compare results from different laboratories. In most studies, mice were infected with a similar number of parasites (2 × 10^6) as employed in ruminant models (10^7 for cows and 10^6 for sheep), which seems inappropriate considering the enormous differences in the weight of these species. Thus, to achieve meaningful results in vaccination and drug-efficacy experiments, a refinement and standardization of this experimental model is necessary. Accordingly, 2 × 10^6, 10^5, 10^4, 10^3 and 10^2 tachyzoites of the highly virulent and well-characterised Nc-Spain7 isolate were subcutaneously inoculated into mice at day 7 of pregnancy, and clinical outcome, vertical transmission, parasite burden and antibody responses were compared. Dams from all infected groups presented nervous signs, and the percentage of surviving pups at day 30 postpartum was surprisingly low (24%) in mice infected with only 10^2 tachyzoites. Importantly, infection with 10^5 tachyzoites resulted in antibody levels, cerebral parasite burden in dams and a 100% mortality rate in pups identical to those seen with 2 × 10^6 tachyzoites. Considering these results, it is reasonable to lower the challenge dose to 10^5 tachyzoites in further experiments assessing drugs or vaccine candidates.
Abstract:
Lung cancer is a devastating disease with very poor prognosis. The design of better treatments for patients would be greatly aided by mouse models that closely resemble the human disease. The most common type of human lung cancer is adenocarcinoma, which frequently metastasizes. Unfortunately, current models of this tumor are inadequate due to the absence of metastasis. Based on molecular findings in human lung cancer and the metastatic potential of osteosarcomas in mutant p53 mouse models, I hypothesized that mice with both K-ras and p53 missense mutations might develop metastatic lung adenocarcinomas. Therefore, I incorporated both K-rasLA1 and p53R172HΔg alleles into mouse lung cells to establish a more faithful model of human lung adenocarcinoma for translational and mechanistic studies. Mice with both mutations (K-rasLA1/+ p53R172HΔg/+) developed advanced lung adenocarcinomas with histopathology similar to human tumors. These lung adenocarcinomas were highly aggressive and metastasized to multiple intrathoracic and extrathoracic sites in a pattern similar to that seen in lung cancer patients. This mouse model also showed gender differences in cancer-related death, and 23.2% of the mice developed pleural mesotheliomas. In a preclinical study, the new drug Erlotinib (Tarceva) decreased the number and size of lung lesions in this model. These data demonstrate that this mouse model most closely mimics human metastatic lung adenocarcinoma and provides an invaluable system for translational studies. To screen for genes important for metastasis, gene expression profiles of primary lung adenocarcinomas and metastases were analyzed. Microarray data showed that these two groups segregated in gene expression and had 79 highly differentially expressed genes (more than 2.5-fold change and p<0.001). Microarray data for Bub1b, Vimentin and CCAM1 were validated in tumors by quantitative real-time PCR (QPCR). Bub1b, a mitotic checkpoint gene, was overexpressed in metastases, and this correlated with more chromosomal abnormalities in metastatic cells. Vimentin, a marker of epithelial-mesenchymal transition (EMT), was also highly expressed in metastases. Interestingly, Twist, a key EMT inducer, was also highly upregulated in metastases by QPCR, and this correlated significantly with the overexpression of Vimentin in the same tumors. These data suggest that EMT occurs in lung adenocarcinomas and is a key mechanism for the development of metastasis in K-rasLA1/+ p53R172HΔg/+ mice. Thus, this mouse model provides a unique system to further probe the molecular basis of metastatic lung cancer.
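The differential-expression screen described above reduces to a simple filter: keep genes whose mean expression differs by more than 2.5-fold between metastases and primaries at p < 0.001. A minimal sketch follows, assuming log2-scale expression values in a genes-by-samples table (the DataFrame layout and the per-gene t-test are assumptions; the study's exact statistics are not specified in the abstract):

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

def differential_genes(expr: pd.DataFrame, primary_cols, met_cols,
                       min_fold=2.5, max_p=1e-3):
    """expr: rows = genes, columns = samples, log2-scale values."""
    log2fc = (expr[met_cols].mean(axis=1)
              - expr[primary_cols].mean(axis=1)).to_numpy()
    _, pvals = ttest_ind(expr[met_cols].to_numpy(),
                         expr[primary_cols].to_numpy(), axis=1)
    keep = (np.abs(log2fc) > np.log2(min_fold)) & (pvals < max_p)
    return expr.index[keep]   # the reported screen yielded 79 such genes
```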
High-resolution microarray analysis of chromosome 20q in human colon cancer metastasis model systems
Abstract:
Amplification of human chromosome 20q DNA is the most frequent chromosomal abnormality detected in sporadic colorectal carcinomas and shows significant correlation with liver metastases. Through comprehensive high-resolution microarray comparative genomic hybridization and microarray gene expression profiling, we have characterized chromosome 20q amplicon genes associated with human colorectal cancer metastasis in two in vitro metastasis model systems. The results revealed increasing complexity of the 20q genomic profile from the primary tumor-derived cell lines to the lymph node and liver metastasis-derived cell lines. Expression analysis of chromosome 20q revealed a subset of overexpressed genes residing within the regions of genomic copy number gain in all the tumor cell lines, suggesting these are chromosome 20q copy-number-responsive genes. Based on their preferential expression levels in the model-system cell lines and their known biological functions, four of the overexpressed genes mapping to the common intervals of genomic copy gain were considered the most promising candidate colorectal metastasis-associated genes. Genomic copy number and expression array data were validated for these genes, with one gene, DNMT3B, standing out as expressed at relatively higher levels in the metastasis-derived cell lines compared with their primary-derived counterparts in both model systems analyzed. The data provide evidence for the role of chromosome 20q genes with low copy gain and elevated expression in the clonal evolution of metastatic cells and suggest that such genes may serve as early biomarkers of metastatic potential. The data also support the utility of combined microarray comparative genomic hybridization and expression array analysis for identifying copy-number-responsive genes in areas of low DNA copy gain in cancer cells.
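The notion of a copy-number-responsive gene used above can be made concrete: a gene counts as responsive when genomic copy gain and elevated expression coincide in every cell line. Here is a minimal sketch, assuming matched genes-by-cell-line tables of segmented aCGH log2 ratios and expression z-scores (the layout and cutoffs are hypothetical placeholders):

```python
import pandas as pd

def copy_number_responsive(cgh_log2: pd.DataFrame, expr_z: pd.DataFrame,
                           gain_cut=0.2, expr_cut=1.0):
    """Genes gained and overexpressed in all cell lines analysed."""
    gained = cgh_log2 > gain_cut          # low-level copy gain per line
    overexpressed = expr_z > expr_cut     # elevated expression per line
    responsive = (gained & overexpressed).all(axis=1)
    return cgh_log2.index[responsive.to_numpy()]
```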
Abstract:
Pulmonary fibrosis is a devastating and lethal lung disease with no current cure. Research into cellular signaling pathways able to modulate aspects of pulmonary inflammation and fibrosis will aid in the development of effective therapies for its treatment. Our laboratory has generated a transgenic/knockout mouse with systemic elevations in adenosine due to the partial lack of its metabolic enzyme, adenosine deaminase (ADA). These mice spontaneously develop progressive lung inflammation and severe pulmonary fibrosis, suggesting that aberrant adenosine signaling influences the development and/or progression of the disease in these animals. These mice also show marked increases in the pro-fibrotic mediator osteopontin (OPN), which are reversed by ADA therapy that lowers lung adenosine levels and ameliorates aspects of the disease. OPN is known to be regulated by intracellular signaling pathways that can be accessed through adenosine receptors, particularly the low-affinity A2B receptor (A2BR), suggesting that adenosine receptor signaling may be responsible for the induction of OPN in our model. In vitro, adenosine and the broad-spectrum adenosine receptor agonist NECA induced a 2.5-fold increase in OPN transcripts in primary alveolar macrophages. This induction was blocked by pharmacological antagonism of the A2BR and by genetic deletion of the receptor subtype in these cells, supporting the hypothesis that the A2BR is responsible for the induction of OPN in our model. These findings demonstrate for the first time that adenosine signaling is an important modulator of pulmonary fibrosis in ADA-deficient mice, in part through A2BR signaling that leads to the induction of the pro-fibrotic molecule osteopontin.
Abstract:
Interaction effects are an important scientific interest in many areas of research. A common approach to investigating the interaction effect of two continuous covariates on a response variable is through a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) type of method has also been used to examine the interaction effect by replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and statistical validation has focused only on the general methods, not specifically on the interaction effect. In this dissertation, we investigated the validity of both approaches based on their mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when the interaction effect exists, because it implies a highly skewed distribution for the response variable. We also showed that the normality and constant variance assumptions required by ANOVA are not satisfied in the model where the continuous covariates are replaced with their discretized levels. Therefore, naïve application of the ANOVA method may lead to an incorrect conclusion. Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived based on the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. In a simulation study, the proposed method was more powerful than least squares regression and the ANOVA method in detecting the interaction effect when the data come from a trivariate normal distribution. The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial, and a baseline age-by-weight interaction effect was found to be significant in predicting the change from baseline in NIHSS at Month 3 among patients who received t-PA therapy.
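For contrast with the proposed method, the standard cross-product approach that the dissertation critiques fits a single multiplicative term and reads its t-test as the test for interaction. A minimal sketch on simulated data (the variable names and values are hypothetical placeholders, not the NINDS trial data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"age": rng.normal(65, 10, n),
                   "weight": rng.normal(80, 15, n)})
df["y"] = (0.3 * df.age + 0.2 * df.weight
           + 0.01 * df.age * df.weight      # built-in interaction
           + rng.normal(0, 5, n))

# 'age:weight' adds the cross-product term to the linear predictor
fit = smf.ols("y ~ age + weight + age:weight", data=df).fit()
print(fit.summary().tables[1])  # the age:weight row is the usual test
```

The dissertation's point is that this test rests on distributional assumptions (for instance, near-normality of the response) that an existing interaction itself tends to violate.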
Abstract:
The characteristics of a global set-up of the Finite-Element Sea-Ice Ocean Model under forcing for the period 1958-2004 are presented. The model set-up is designed to study the variability in the deep-water mass formation areas and therefore has regionally enhanced resolution in the deep-water formation areas of the Labrador Sea, Greenland Sea, Weddell Sea and Ross Sea. The sea-ice model reproduces realistic sea-ice distributions and variability in the sea-ice extent of both hemispheres, as well as sea-ice transport that compares well with observational data. Based on a comparison between the model and ocean weather ship data in the North Atlantic, we find that the vertical structure is well captured in areas with high resolution. In our model set-up, we are able to simulate decadal ocean variability, including several salinity anomaly events and the corresponding fingerprints in the vertical hydrography. The ocean state of the model set-up features pronounced variability in the Atlantic Meridional Overturning Circulation as well as in the associated mixed layer depth pattern in the North Atlantic deep-water formation areas.
Abstract:
We measured the relationship between CO2-induced seawater acidification, photo-physiological performance and intracellular pH (pHi) in a model cnidarian-dinoflagellate symbiosis, the sea anemone Aiptasia sp., under ambient (289.94 ± 12.54 µatm), intermediate (687.40 ± 25.10 µatm) and high (1459.92 ± 65.51 µatm) CO2 conditions. These treatments represented current CO2 levels plus the CO2 stabilisation scenarios IV and VI of the Intergovernmental Panel on Climate Change (IPCC). Anemones were exposed to each treatment for two months and sampled at regular intervals. At each time-point we measured a series of physiological responses: maximum dark-adapted fluorescent yield of PSII (Fv/Fm), gross photosynthetic rate, respiration rate, symbiont population density, and light-adapted pHi of both the dinoflagellate symbiont and the isolated host anemone cell. We observed increases in all but one photo-physiological parameter (the Pgross:R ratio). At the cellular level, light-adapted symbiont pHi increased under both the intermediate and high CO2 treatments relative to control conditions (pHi 7.35 and 7.46, respectively, versus pHi 7.25). The response of light-adapted host pHi was more complex, with no change observed under the intermediate CO2 treatment but a 0.3 pH-unit increase under the high CO2 treatment (pHi 7.19 and 7.48, respectively). This difference is likely a result of a disproportionate increase in photosynthesis relative to respiration at the higher CO2 concentration. Our results suggest that, rather than causing cellular acidosis, the addition of CO2 will enhance photosynthetic performance, enabling both the symbiont and the host cell to withstand predicted ocean acidification scenarios.
Abstract:
This paper proposes an interleaved multiphase buck converter with a minimum-time control strategy for envelope amplifiers in high-efficiency RF power amplifiers. The envelope amplifier combines the proposed converter with a linear regulator in series. High system efficiency can be obtained by modulating the supply voltage of the envelope amplifier with fast output-voltage transitions of the converter, which operates at several particular duty cycles that achieve total ripple cancellation. The transient model for minimum-time control is explained, and the calculation of the transient times, which are pre-calculated and stored in a look-up table, is presented. The filter design trade-off that limits the envelope-modulation capability is also discussed. Experimental results verify the fast voltage transients obtained with a 4-phase buck prototype.
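The "particular duty cycles" mentioned above come from a standard property of interleaving: with N phases shifted by 360/N degrees, the summed inductor current ripple cancels exactly at duty cycles D = k/N. Below is a minimal sketch of enumerating these levels (illustrative only; the paper's look-up table stores pre-calculated transient times between such levels):

```python
from fractions import Fraction

def ripple_cancelling_duty_cycles(n_phases: int):
    """Duty cycles D = k/N, k = 1..N-1, at which the summed ripple of an
    N-phase interleaved buck converter cancels completely."""
    return [Fraction(k, n_phases) for k in range(1, n_phases)]

for d in ripple_cancelling_duty_cycles(4):
    print(d)  # 1/4, 1/2, 3/4 for the 4-phase prototype
```

The minimum-time controller then steps the output between the voltages corresponding to these duty cycles, using the pre-computed transient times from the look-up table.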
Abstract:
Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing programs as functions on their input data sizes, without actually having to execute the programs. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects that improve the efficiency, the accuracy and the reliability of the analysis results still need to be further investigated. This thesis tackles this need from four different perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses which keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. In this thesis we present two extensions to this approach: the first is to consider array accesses (in addition to object fields), while the second focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to re-analyze fragments of code that are not affected by the changes. During software development, programs are permanently modified, but most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change in the program is made, is able to reconstruct the upper bounds of all affected methods incrementally. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on developing a formal framework for verifying the resource guarantees obtained by the analyzers, instead of verifying the tools. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code. COSTA derives upper bounds for Java programs while KeY proves the validity of these bounds and provides a certificate; the main contribution of our work is to show that this tool cooperation can automatically produce verified resource guarantees. (4) Distribution and concurrency are today mainstream. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the concurrency units that communicate via asynchronous method calls. Distribution suggests that the analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, keeps the cost of the diverse distributed components separate.
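Point (2) centres on a multi-domain incremental fixed-point computation. Here is a minimal sketch of the underlying worklist scheme, assuming a callers map and a transfer function that recomputes one method's summary from its callees' current summaries (all names are hypothetical placeholders, not COSTA's API):

```python
def fixed_point(seed_methods, callers, transfer, summary):
    """Propagate re-analysis only to methods affected by a change.

    summary: current abstract result per method; seeding the worklist
    with just the changed methods makes the computation incremental
    rather than whole-program.
    """
    worklist = list(seed_methods)
    while worklist:
        m = worklist.pop()
        updated = transfer(m, summary)           # recompute from callees
        if updated != summary.get(m):
            summary[m] = updated
            worklist.extend(callers.get(m, ()))  # rework only the callers
    return summary
```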