53 results for Sink nodes


Relevance:

10.00%

Publisher:

Abstract:

Eighty-five new cases of conjunctival melanoma (CM) were diagnosed in Finland between 1967 and 2000. The annual crude incidence of CM was 0.51 per million inhabitants. The average age-adjusted incidence of 0.54 doubled during the study period, analogous to the increase in the incidence of cutaneous malignant melanoma during this period, suggesting a possible role for ultraviolet radiation in its pathogenesis. Nonlimbal tumors were more likely than limbal ones to recur, and they were associated with decreased survival. Increasing tumor thickness and recurrence of the primary tumor were other clinical factors related to death from CM. The histopathologic specimens of the 85 patients with CM were studied for cell type, mitotic count, tumor-infiltrating lymphocytes and macrophages, mean vascular density, extravascular matrix loops and networks, and mean diameter of the ten largest nucleoli (MLN). The absence of epithelioid cells, increasing mitotic count, and small MLN were associated with shorter time to recurrence in univariate Cox regression. None of the histopathologic variables was associated with mortality from CM. Four (5%) patients had a CM limited to the cornea without evidence of a tumor other than primary acquired melanosis of the conjunctiva. Because there are no melanocytes in the cornea, the origin of these melanomas most likely is the limbal conjunctiva. All four corneally displaced CM were limited to the epithelium, and none of the patients developed metastases. An anatomic sub-classification based on my patients and the world literature was developed for corneally displaced CM. In 20 patients the metastatic pattern could be determined. Ten patients had initial systemic metastases detected, nine had initial regional metastases, and in one case the two types were detected simultaneously.
The patients most likely to develop either type of initial metastases were those with nonlimbal conjunctival melanoma, those with a primary tumor more than 2 mm thick, and those with a recurrent conjunctival melanoma. Approximately two thirds of the patients had limbal CM, a location associated with good prognosis. One third, however, had a primary CM originating outside the limbus. In these patients the chance of developing local recurrences as well as systemic metastases was significantly higher than in patients with limbal CM. Each recurrence carries an increased risk of developing metastases, and recurrences contribute to death along with increasing tumor thickness and nonlimbal tumor location. In my data, initial locoregional and systemic metastases occurred in equal numbers of patients. Patients with limbal primary tumors less than 2 mm in thickness rarely experienced metastases, unless the tumor recurred. Consequently, the patients most likely to benefit from sentinel lymph node biopsy are those with nonlimbal tumors, CM over 2 mm thick, or recurrent CM. The histopathology of CM differs from that of uveal melanoma. Microvascular factors did not prove to be of prognostic importance, possibly because CM disseminates first to the regional lymph nodes at least as often as systemically, unlike uveal melanoma, which almost always disseminates hematogenously.


Head and neck squamous cell carcinoma (HNSCC) is the sixth most common cancer worldwide. Well-known risk factors include tobacco smoking and alcohol consumption. Overall survival has improved, but remains low, especially in developing countries. One reason for this is the often advanced stage of the disease at the time of diagnosis, but another is the lack of reliable prognostic tools to enable individualized patient treatment to improve outcome. To date, the TNM classification still serves as the best disease evaluation criterion, although it does not take into account the molecular basis of the tumor. The need for surrogate molecular markers for more accurate disease prediction has increased research interest in this field. We investigated the prevalence, physical status, and viral load of human papillomavirus (HPV) in HNSCC to determine the impact of HPV on head and neck carcinogenesis. The prevalence and genotyping of HPV were assessed with an SPF10 PCR microtiter plate-based hybridization assay (DEIA), followed by a line probe-based genotyping assay. More than half of the patients had HPV DNA in their tumor specimens. Oncogenic HPV-16 was the most common type, and coinfections with other oncogenic and benign-associated types also occurred. HPV-16 viral load was unevenly distributed among different tumor sites; the tonsils harbored significantly greater amounts of virus than other sites. Episomal location of HPV-16 was associated with large tumors, and both integrated and mixed forms of viral DNA were detected. In this series, we could not show that the presence of HPV DNA correlated with survival. In addition, we investigated the prevalence and genotype of HPV in laryngeal carcinoma patients in a prospective Nordic multicenter study based on fresh-frozen laryngeal tumor samples to determine whether the tumors were HPV-associated.
These patients were also examined and interviewed at diagnosis for known risk factors, such as tobacco smoking and alcohol consumption, and for several other habits to elucidate their effects on patient survival. HPV analysis was performed with the same protocols as in the first study. Only 4% of the specimens harbored HPV DNA. Heavy drinking was associated with poor survival. Heavy-drinking patients were also younger than non-heavy drinkers and had a more advanced stage of disease at diagnosis. Heavy drinkers had worse oral hygiene than non-heavy drinkers; however, poor oral hygiene did not have prognostic significance. A history of chronic laryngitis, gastroesophageal reflux disease, and orogenital sex contacts were rare in this series. To clarify why vocal cord carcinomas seldom metastasize, we determined tumor lymph vessel density (LVD) and blood vessel density (BVD) in HNSCC patients. We used a novel lymphatic vessel endothelial marker (LYVE-1 antibody) to locate the lymphatic vessels in HNSCC samples and CD31 to detect the blood microvessels. We found carcinomas of the vocal cords to harbor fewer lymphatic and blood microvessels than carcinomas arising from other sites. The lymphatic and blood microvessel densities did not correlate with tumor size. High BVD was strongly correlated with high LVD. Neither BVD nor LVD showed any association with survival in our series. The immune system plays an important role in tumorigenesis, as neoplastic cells have to escape the cytotoxic lymphocytes in order to survive. Several candidate HLA class II alleles have been reported to be prognostic in cervical carcinomas, an epithelial malignancy resembling HNSCC. These alleles may have an impact on head and neck carcinomas as well. We determined HLA-DRB1* and -DQB1* alleles in HNSCC patients. Healthy organ donors served as controls. The Inno-LiPA reverse dot-blot kit was used to identify alleles in patient samples.
No single haplotype was found to be predictive of either the risk for head and neck cancer or the clinical course of the disease. However, alleles observed to be prognostic in cervical carcinomas showed a similar tendency in our series. DRB1*03 was associated with node-negative disease at diagnosis; DRB1*08 and DRB1*13 were associated with early-stage disease; DRB1*04 was associated with a lower risk of tumor relapse; and DQB1*03 and DQB1*0502 were more frequent in controls than in patients. However, these associations reached only borderline significance in our HNSCC patients.


Since national differences exist in genes, environment, diet and life habits, and also in the use of postmenopausal hormone therapy (HT), the associations between different hormone therapies and the risk for breast cancer were studied among Finnish postmenopausal women. All Finnish women over 50 years of age who used HT were identified from the national medical reimbursement register, established in 1994, and followed up for breast cancer incidence (n = 8,382 cases) until 2005 with the aid of the Finnish Cancer Registry. The risk for breast cancer in HT users was compared to that in the general female population of the same age. Among women using oral or transdermal estradiol alone (ET) (n = 110,984) during the study period 1994–2002, the standardized incidence ratio (SIR) for breast cancer in users for < 5 years was 0.93 (95% confidence interval (CI) 0.80–1.04), and in users for ≥ 5 years 1.44 (1.29–1.59). This therapy was associated with similar rises in the ductal and lobular types of breast cancer. Both localized-stage cancers (1.45; 1.26–1.66) and cancers spread to regional nodes (1.35; 1.09–1.65) were associated with the use of systemic ET. Oral estriol and vaginal estrogens were not associated with an increased risk for breast cancer. The use of estrogen-progestagen therapy (EPT) in the study period 1994–2005 (n = 221,551) was associated with an increased incidence of breast cancer (1.31; 1.20–1.42) among women using oral or transdermal EPT for 3–5 years, and the incidence increased with increasing duration of exposure (≥ 10 years: 2.07; 1.84–2.30). Continuous EPT entailed a significantly higher breast cancer incidence (2.44; 2.17–2.72) than sequential EPT (1.78; 1.64–1.90) after 5 years of use. The use of norethisterone acetate (NETA) as a supplement to estradiol was associated with a higher incidence of breast cancer after 5 years of use (2.03; 1.88–2.18) than was medroxyprogesterone acetate (MPA) (1.64; 1.49–1.79).
The SIR for the lobular type of breast cancer was increased within 3 years of EPT exposure (1.35; 1.18–1.53), and after 10 years of exposure the incidence of the lobular type (2.93; 2.33–3.64) was significantly higher than that of the ductal type (1.92; 1.67–2.18). To control for some confounding factors, two case-control studies were performed. All Finnish women aged 50–62 years in 1995–2007 and diagnosed with a first invasive breast cancer (n = 9,956) were identified from the Finnish Cancer Registry, and 3 controls of similar age (n = 29,868) without breast cancer were retrieved from the Finnish national population registry. Subjects were linked to the medical reimbursement register to define HT use. The use of ET was not associated with an increased risk for breast cancer (1.00; 0.92–1.08), nor was progestagen-only therapy used for less than 3 years. However, the use of tibolone was associated with an elevated risk for breast cancer (1.39; 1.07–1.81). The case-control study confirmed the EPT results regarding sequential vs. continuous use of progestagen, including progestagen released continuously by an intrauterine device; the increased risk was seen already within 3 years of use (1.65; 1.32–2.07). The dose of NETA was not a determinant of breast cancer risk. Both systemic ET and EPT are associated with an elevation in the risk for breast cancer. These risks largely resemble those seen in several other countries. The use of an intrauterine system alone, or as a complement to systemic estradiol, is also associated with an increased breast cancer risk. These data emphasize the need for detailed information for women considering starting HT.
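The standardized incidence ratio (SIR) used throughout these cohort analyses is the ratio of observed to expected cancer cases, with a confidence interval reflecting the Poisson variability of the observed count. A minimal sketch of the calculation (the counts are illustrative, not the registry figures, and Byar's approximation stands in for exact Poisson limits):

```python
import math

def sir_with_ci(observed: int, expected: float, z: float = 1.96):
    """Standardized incidence ratio O/E with an approximate 95% CI.

    Byar's approximation to the exact Poisson limits is applied to the
    observed count, and the limits are then divided by the expected count.
    """
    o = observed
    lo = o * (1 - 1 / (9 * o) - z / (3 * math.sqrt(o))) ** 3
    hi = (o + 1) * (1 - 1 / (9 * (o + 1)) + z / (3 * math.sqrt(o + 1))) ** 3
    return o / expected, lo / expected, hi / expected

# Hypothetical example: 260 breast cancers observed where 200 were expected.
sir, ci_low, ci_high = sir_with_ci(260, 200.0)
print(f"SIR = {sir:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

An SIR whose lower confidence limit exceeds 1.00, as in most of the EPT figures quoted above, indicates a statistically significant excess over the general-population incidence.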


There is a growing need to understand the exchange processes of momentum, heat and mass between an urban surface and the atmosphere, as they affect our quality of life. Understanding the source/sink strengths as well as the mixing mechanisms of air pollutants is particularly important due to their effects on human health and climate. This work aims to improve our understanding of these surface–atmosphere interactions based on the analysis of measurements carried out in Helsinki, Finland. The vertical exchange of momentum, heat, carbon dioxide (CO2) and aerosol particle number was measured with the eddy covariance technique at the urban measurement station SMEAR III, where the concentrations of ultrafine, accumulation mode and coarse particle numbers, nitrogen oxides (NOx), carbon monoxide (CO), ozone (O3) and sulphur dioxide (SO2) were also measured. These measurements were carried out over varying measurement periods between 2004 and 2008. In addition, black carbon mass concentration was measured at the Helsinki Metropolitan Area Council site during three campaigns in 1996–2005. Thus, the analyzed dataset constituted by far the most comprehensive set of long-term turbulent flux measurements from urban areas reported in the literature. Moreover, simultaneously measured urban air pollution concentrations and turbulent fluxes were examined for the first time. The complex measurement surroundings enabled us to study the effect of different urban covers on the exchange processes from a single point of measurement. The sensible and latent heat fluxes closely followed the intensity of solar radiation, and the sensible heat flux always exceeded the latent heat flux due to anthropogenic heat emissions and the conversion of solar radiation to direct heat in urban structures. This urban heat island effect was most evident during winter nights. The effect of land use cover was seen as increased sensible heat fluxes in more built-up areas compared with areas of high vegetation cover.
Both aerosol particle and CO2 exchanges were largely affected by road traffic, and the highest diurnal fluxes reached 10⁹ m⁻² s⁻¹ and 20 µmol m⁻² s⁻¹, respectively, in the direction of the road. Local road traffic had the greatest effect on ultrafine particle concentrations, whereas meteorological variables were more important for accumulation mode and coarse particle concentrations. The measurement surroundings of the SMEAR III station served as a source for both particles and CO2, except in summer, when the daytime vegetation uptake of CO2 in the vegetation sector exceeded the anthropogenic sources and we observed a downward median flux of 8 µmol m⁻² s⁻¹. This work improved our understanding of the interactions between an urban surface and the atmosphere in a city located at high latitudes in a semi-continental climate. The results can be utilised in urban planning, as the fraction of vegetation cover and vehicular activity were found to be the major environmental drivers affecting most of the exchange processes. However, in order to understand these exchange and mixing processes on a city scale, more measurements above various urban surfaces, accompanied by numerical modelling, are required.
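The eddy covariance technique used for these flux measurements obtains a vertical turbulent flux as the covariance between fluctuations of vertical wind speed and of the transported scalar, averaged over a period such as 30 minutes. A minimal sketch on synthetic data (the numbers are illustrative, not SMEAR III measurements):

```python
import random

def eddy_covariance_flux(w, c):
    """Turbulent flux as the covariance of vertical wind speed w (m/s)
    and scalar concentration c (e.g. µmol/m3): F = mean(w'c')."""
    n = len(w)
    w_mean = sum(w) / n
    c_mean = sum(c) / n
    return sum((wi - w_mean) * (ci - c_mean) for wi, ci in zip(w, c)) / n

# Synthetic 30-min record at 10 Hz: updrafts carry CO2-depleted air
# upward (daytime uptake), producing a downward (negative) flux.
random.seed(1)
w = [random.gauss(0.0, 0.3) for _ in range(18000)]
c = [600.0 - 2.0 * wi + random.gauss(0.0, 0.5) for wi in w]
flux = eddy_covariance_flux(w, c)
print(f"CO2 flux: {flux:.2f} µmol m⁻² s⁻¹")
```

The sign convention follows the text above: a negative flux is directed downward, toward the surface.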


Aerosol particles in the atmosphere are known to significantly influence ecosystems, to change air quality and to exert negative health effects. Atmospheric aerosols influence climate by cooling the atmosphere and the underlying surface through the scattering of sunlight, by warming the atmosphere through the absorption of sunlight and of thermal radiation emitted by the Earth's surface, and through their action as cloud condensation nuclei. Aerosols are emitted from both natural and anthropogenic sources. Depending on their size, they can be transported over significant distances, while undergoing considerable changes in their composition and physical properties. Their lifetime in the atmosphere varies from a few hours to a week. New particle formation is a result of gas-to-particle conversion. Once formed, atmospheric aerosol particles may grow by condensation or coagulation, or be removed by deposition processes. In this thesis we describe analyses of air masses, meteorological parameters and synoptic situations to reveal conditions favourable for new particle formation in the atmosphere. We studied the concentration of ultrafine particles in different types of air masses, and the role of atmospheric fronts and cloudiness in the formation of atmospheric aerosol particles. The dominant role of Arctic and Polar air masses in causing new particle formation was clearly observed at Hyytiälä, Southern Finland, during all seasons, as well as at other measurement stations in Scandinavia. In all seasons and on a multi-year average, Arctic and North Atlantic areas were the sources of nucleation mode particles. In contrast, concentrations of accumulation mode particles and condensation sink values in Hyytiälä were highest in continental air masses arriving at Hyytiälä from Eastern Europe and Central Russia. The most favourable situation for new particle formation during all seasons was cold air advection after cold-front passages.
Such a period could last a few days until the next front reached Hyytiälä. The frequency of aerosol particle formation relates to the frequency of low-cloud-amount days in Hyytiälä. Cloudiness of less than 5 octas is one of the factors favouring new particle formation. Cloudiness above 4 octas appears to be an important factor that prevents particle growth, due to the decrease of solar radiation, which is one of the important meteorological parameters in atmospheric particle formation and growth. Keywords: Atmospheric aerosols, particle formation, air mass, atmospheric front, cloudiness
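The condensation sink mentioned above quantifies how rapidly condensable vapour is scavenged by the pre-existing particle population; it is computed from the measured size distribution with a transition-regime correction. A hedged sketch (the diffusion coefficient, mean free path and example size distribution are illustrative assumptions):

```python
import math

def condensation_sink(radii_m, number_conc_m3, diff_coef=1e-5,
                      mfp=1e-7, alpha=1.0):
    """Condensation sink CS (1/s) for a discrete particle size distribution.

    CS = 4*pi*D * sum(beta_i * r_i * N_i), where beta is the Fuchs-Sutugin
    transition-regime correction as a function of the Knudsen number.
    diff_coef: vapour diffusion coefficient (m2/s); mfp: mean free path (m).
    """
    total = 0.0
    for r, n in zip(radii_m, number_conc_m3):
        kn = mfp / r
        beta = (1 + kn) / (1 + (4 / (3 * alpha) + 0.377) * kn
                           + 4 / (3 * alpha) * kn ** 2)
        total += beta * r * n
    return 4 * math.pi * diff_coef * total

# Illustrative accumulation-mode population: particles of 100 nm diameter
# (radius 50 nm) at 1000 per cm3.
cs = condensation_sink([50e-9], [1000e6])
print(f"CS ≈ {cs:.1e} s⁻¹")
```

A low CS, as in the clean Arctic air masses described above, leaves more vapour available for nucleation and growth; a high CS in polluted continental air suppresses new particle formation.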


The polar regions are an energy sink of the Earth system, as the Sun's rays do not reach the Poles for half of the year and hit them only at very low angles for the other half. In summer, solar radiation is the dominant energy source for the polar areas; therefore, even small changes in the surface albedo strongly affect the surface energy balance and, thus, the speed and amount of snow and ice melting. In winter, the main heat sources for the atmosphere are the cyclones approaching from lower latitudes, and the atmosphere–surface heat transfer takes place through turbulent mixing and longwave radiation, the latter dominated by clouds. The aim of this thesis is to improve knowledge of the surface and atmospheric processes that control the surface energy budget over snow and ice, with particular focus on albedo during the spring and summer seasons, and on horizontal advection of heat, cloud longwave forcing, and turbulent mixing during the winter season. The critical importance of a correct albedo representation in models is illustrated through the analysis of the causes of the errors in the surface and near-surface air temperature produced in a short-range numerical weather forecast by the HIRLAM model. Then, the daily and seasonal variability of snow and ice albedo has been examined by analysing field measurements of albedo carried out in different environments. On the basis of the data analysis, simple albedo parameterizations have been derived, which can be implemented into thermodynamic sea ice models, as well as into numerical weather prediction and climate models. Field measurements of radiation and turbulent fluxes over the Bay of Bothnia (Baltic Sea) also allowed us to examine the impact of a large albedo change during the melting season on the surface energy and ice mass budgets.
When high contrasts in surface albedo are present, as in the case of snow-covered areas next to open water, the effect of surface albedo heterogeneity on the downwelling solar irradiance under overcast conditions is very significant, although it is usually not accounted for in single-column radiative transfer calculations. To account for this effect, an effective albedo parameterization based on three-dimensional Monte Carlo radiative transfer calculations has been developed. To test a potentially relevant application of the effective albedo parameterization, its performance in the ground-based retrieval of cloud optical depth was illustrated. Finally, the factors causing the large variations of the surface and near-surface temperatures over the Central Arctic during winter were examined. The relative importance of cloud radiative forcing, turbulent mixing, and lateral heat advection to the Arctic surface temperature was quantified through the analysis of direct observations from Russian drifting ice stations, with the lateral heat advection calculated from reanalysis products.
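Simple albedo parameterizations of the kind discussed above typically express snow or ice albedo as an empirical function of a prognostic variable such as surface temperature, so that albedo drops as the surface approaches melting. The form below is a generic illustration of the idea with assumed coefficients, not the parameterization actually derived from the field data:

```python
def snow_albedo(surface_temp_c, cold_albedo=0.85, melt_albedo=0.60,
                transition_c=10.0):
    """Generic temperature-dependent snow albedo: constant when cold,
    decreasing linearly over the last `transition_c` degrees below 0 °C,
    and clamped at the melting-snow value at or above 0 °C."""
    if surface_temp_c <= -transition_c:
        return cold_albedo
    if surface_temp_c >= 0.0:
        return melt_albedo
    frac = (surface_temp_c + transition_c) / transition_c  # 0 at cold end, 1 at 0 °C
    return cold_albedo + frac * (melt_albedo - cold_albedo)

for t in (-20.0, -5.0, 0.0):
    print(f"{t:6.1f} °C -> albedo {snow_albedo(t):.3f}")
```

A drop of this size in absorbed shortwave radiation near the melting point is what makes the melt-season energy balance so sensitive to the albedo formulation.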


A large fraction of an XML document typically consists of text data. The XPath query language allows text search via the equal, contains, and starts-with predicates. Such predicates can be efficiently implemented using a compressed self-index of the document's text nodes. Most queries, however, contain some parts querying the text of the document, plus some parts querying the tree structure. It is therefore a challenge to choose an appropriate evaluation order for a given query, which optimally leverages the execution speeds of the text and tree indexes. Here the SXSI system is introduced. It stores the tree structure of an XML document using a bit array of opening and closing brackets plus a sequence of labels, and stores the text nodes of the document using a global compressed self-index. On top of these indexes sits an XPath query engine that is based on tree automata. The engine uses fast counting queries of the text index in order to dynamically determine whether to evaluate top-down or bottom-up with respect to the tree structure. The resulting system has several advantages over existing systems: (1) on pure tree queries (without text search) such as the XPathMark queries, the SXSI system performs on par with or better than the fastest known systems, MonetDB and Qizx, (2) on queries that use text search, SXSI outperforms the existing systems by 1-3 orders of magnitude (depending on the size of the result set), and (3) with respect to memory consumption, SXSI outperforms all other systems for counting-only queries.
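The bit array of opening and closing brackets described above is the balanced-parentheses encoding of the document tree: each node contributes an opening bit at its preorder position and a closing bit after its subtree, with element labels stored in a parallel sequence. A toy sketch of the encoding and of the findclose navigation primitive (a linear scan here for clarity; succinct indexes answer it in constant time):

```python
def encode(tree):
    """Balanced-parentheses encoding of a tree given as (label, children)."""
    label, children = tree
    bits, labels = [1], [label]  # 1 = opening bracket at preorder position
    for child in children:
        b, l = encode(child)
        bits += b
        labels += l
    bits.append(0)               # 0 = closing bracket after the subtree
    return bits, labels

def findclose(bits, i):
    """Position of the 0 matching the 1 at position i."""
    depth = 0
    for j in range(i, len(bits)):
        depth += 1 if bits[j] else -1
        if depth == 0:
            return j

# Encoding of <a><b/><c><d/></c></a>
bits, labels = encode(("a", [("b", []), ("c", [("d", [])])]))
print(bits)                # [1, 1, 0, 1, 1, 0, 0, 0]
print(findclose(bits, 0))  # close of the root: 7
```

With findclose (and its companions rank/select) available in constant time, subtree skips and parent/sibling navigation become cheap, which is what makes the automaton-driven top-down versus bottom-up choice practical.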


Interaction between forests and the atmosphere occurs through radiative and turbulent transport. The fluxes of energy and mass between the surface and the atmosphere directly influence the properties of the lower atmosphere and, on longer time scales, the global climate. Boreal forest ecosystems are central to the global climate system and to its responses to human activities, because they are significant sources and sinks of greenhouse gases and of aerosol particles. The aim of the present work was to improve our understanding of the interplay between the biologically active canopy, the microenvironment and the turbulent flow; specifically, the aim was to quantify the contribution of different canopy layers to whole-forest fluxes. For this purpose, long-term micrometeorological and ecological measurements made in a Scots pine (Pinus sylvestris) forest at the SMEAR II research station in Southern Finland were used. The properties of turbulent flow are strongly modified by the interaction with the canopy elements: momentum is efficiently absorbed in the upper layers of the canopy, mean wind speed and turbulence intensities decrease rapidly towards the forest floor, and the power spectra are modulated by the spectral short-cut. In the relatively open forest, diabatic stability above the canopy explained much of the variation in velocity statistics within the canopy, except in strongly stable stratification. Large eddies, ranging from tens to hundreds of meters in size, were responsible for the major fraction of turbulent transport between the forest and the atmosphere. Because of this, the eddy-covariance (EC) method proved successful for measuring energy and mass exchange inside the forest canopy, with the exception of strongly stable conditions. Vertical variations of the within-canopy microclimate, light attenuation in particular, strongly affect the assimilation and transpiration rates.
According to model simulations, assimilation rate decreases with height more rapidly than stomatal conductance (gs) and transpiration, and, consequently, the vertical source–sink distributions for carbon dioxide (CO2) and water vapor (H2O) diverge. Upscaling from the shoot scale to the canopy scale was found to be sensitive to the chosen stomatal control description. The upscaled canopy-level CO2 fluxes can vary by as much as 15% and H2O fluxes by 30% even if the gs models are calibrated against the same leaf-level dataset. A pine forest has distinct overstory and understory layers, which both contribute significantly to canopy-scale fluxes. The forest floor vegetation and soil accounted for between 18 and 25% of evapotranspiration and between 10 and 20% of sensible heat exchange. The forest floor was also an important deposition surface for aerosol particles; between 10 and 35% of the dry deposition of particles in the 10–30 nm size range occurred there. Because of the northern latitude, the seasonal cycle of climatic factors strongly influences the surface fluxes. Besides the seasonal constraints, the partitioning of available energy into sensible and latent heat depends, through stomatal control, on the physiological state of the vegetation. In spring, available energy was consumed mainly as sensible heat, and the latent heat flux peaked about two months later, in July–August. On the other hand, annual evapotranspiration remained rather stable over a range of environmental conditions, and thus any increase in accumulated radiation affects primarily the sensible heat exchange. Finally, autumn temperature had a strong effect on ecosystem respiration, but its influence on photosynthetic CO2 uptake was restricted by low radiation levels. Therefore, the projected autumn warming in the coming decades will presumably reduce the positive effects of earlier spring recovery on the carbon uptake potential of boreal forests.
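The within-canopy light attenuation that drives the simulated assimilation profiles above is commonly described by the Beer–Lambert law, I(L) = I0·exp(−k·L), where L is the leaf area index accumulated from the canopy top and k an extinction coefficient. A minimal sketch (the coefficient and LAI values are illustrative assumptions, not SMEAR II parameters):

```python
import math

def irradiance(i0, lai_cum, k=0.5):
    """Beer-Lambert attenuation of radiation through a canopy:
    I(L) = I0 * exp(-k * L), with L the cumulative leaf area index
    measured downward from the canopy top."""
    return i0 * math.exp(-k * lai_cum)

# Light profile through a canopy with total LAI of 3 (illustrative).
i0 = 1000.0  # W/m2 above the canopy
for lai in (0.0, 1.0, 2.0, 3.0):
    print(f"cumulative LAI {lai:.0f}: {irradiance(i0, lai):.0f} W/m2")
```

Because assimilation saturates with light while transpiration responds more linearly, this exponential profile is one reason the CO2 and H2O source–sink distributions diverge with depth in the canopy.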


Agriculture's contribution to climate change is controversial, as agriculture is a significant source of greenhouse gases but also a sink of carbon. Hence its economic and technological potential to mitigate climate change has been argued to be noteworthy. However, the social profitability of emission mitigation depends on factors beyond the emission reductions themselves, such as impacts on surface water quality and profits from production. Consequently, to value agricultural climate mitigation practices comprehensively, these environmental and economic co-effects should be taken into account. The objective of this thesis was to develop an integrated economic and ecological model to analyse the social welfare of crop cultivation in Finland under two distinct cultivation technologies, conventional tillage and conservation tillage (no-till). Further, we ask whether it would be privately or socially profitable to allocate some barley cultivation land to alternative uses, such as green set-aside or afforestation, when production costs, GHGs and water quality impacts are taken into account. In the theoretical framework we depict the optimal input use and land allocation choices in terms of environmental impacts and profit from production, and derive the optimal tax and payment policies for climate- and water-quality-friendly land allocation. The empirical application of the model uses Finnish data on production cost and profit structure and on environmental impacts. According to our results, the emission mitigation practices examined are not self-evidently beneficial for farmers or society. On the contrary, in some cases alternative land allocation could even reduce social welfare, favoring conventional crop cultivation. This is the case for mineral soils such as clay and silt soils. On organic agricultural soils, the climate mitigation practices, in this case afforestation and green fallow, give more promising results, decreasing climate emissions and nutrient runoff to water systems.
No-till technology does not seem to benefit climate mitigation, although it does decrease other environmental impacts. Nevertheless, the data on how emission mitigation practices affect production and climate are limited and partly contradictory. More specific experimental studies on the interaction between emission mitigation practices and the environment are needed; in particular, area-specific production and environmental factors, as well as food security, food safety and socio-economic impacts, should be taken into account.


Atmospheric particles affect the radiation balance of the Earth and thus the climate. New particle formation from nucleation has been observed in diverse atmospheric conditions, but the actual formation path is still unknown. The prevailing conditions can be exploited to evaluate proposed formation mechanisms. This study aims to improve our understanding of new particle formation from the viewpoint of atmospheric conditions. The role of atmospheric conditions in particle formation was studied by atmospheric measurements, theoretical model simulations and simulations based on observations. Two separate column models were further developed for aerosol and chemical simulations. Model simulations allowed us to expand the study from local conditions to varying conditions in the atmospheric boundary layer, while the long-term measurements described especially the characteristic mean conditions associated with new particle formation. The observations show a statistically significant difference in meteorological and background aerosol conditions between observed event and non-event days. New particle formation above the boreal forest is associated with strong convective activity, low humidity and a low condensation sink. The probability of a particle formation event is predicted by an equation formulated for upper boundary layer conditions. The model simulations call into question whether kinetic sulphuric-acid-induced nucleation is the primary particle formation mechanism in the presence of organic vapours. Simultaneously, the simulations show that ignoring spatial and temporal variation in new particle formation studies may lead to faulty conclusions. On the other hand, the theoretical simulations indicate that short-scale variations in temperature and humidity are unlikely to have a significant effect on the mean binary water–sulphuric acid nucleation rate. The study emphasizes the significance of mixing and fluxes in particle formation studies, especially in the atmospheric boundary layer.
The further developed models allow extensive aerosol physical and chemical studies in the future.


As the virtual world grows more complex, finding a standard way to store data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on the computers of some of his family members, on some of his own computers, and also at some online services that he uses. When all actors operate on one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear when one computer breaks or one service provider goes out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable to users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing an anonymous web, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure the names used for the content and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database.
Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends rather than letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports the development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we do not expect our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
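The cryptographically verifiable references described above can be illustrated with a minimal sketch (our own illustration, not Peerscape's actual API): an immutable data item is referenced by the hash of its content, so anyone holding a reference can check that a copy fetched from any replica has not been forged:

```python
import hashlib

def make_ref(data: bytes) -> str:
    """Self-certifying reference: the SHA-256 digest of the item's content."""
    return hashlib.sha256(data).hexdigest()

def verify(ref: str, data: bytes) -> bool:
    """Check that data fetched from any replica matches its reference."""
    return make_ref(data) == ref

album = b"family photo album bytes"
ref = make_ref(album)  # the same reference works against every replica
ok = verify(ref, album)
forged = verify(ref, album + b"tampered")
```

Content hashes only name immutable items; for mutable data, systems such as Freenet instead verify a signature over the content against a public key embedded in the name.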

Relevância:

10.00%

Publicador:

Resumo:

The primary aim of this thesis was the evaluation of the perfusion of normal organs in cats using contrast-enhanced ultrasound (CEUS), to serve as a reference for later clinical studies. Little is known of the use of CEUS in cats, especially regarding its safety and the effects of anesthesia on the procedure; thus, secondary aims here were to validate the quantitative analysis method, to investigate the biological effects of CEUS on feline kidneys, and to assess the effect of anesthesia on splenic perfusion in cats undergoing CEUS.

The studies were conducted on healthy, young, purpose-bred cats. CEUS of the liver, left kidney, spleen, pancreas, small intestine, and mesenteric lymph nodes was performed on ten anesthetized male cats to characterize the normal perfusion of these organs. To validate the quantification method, the effects of the placement and size of the region of interest (ROI) on perfusion parameters were investigated using CEUS: three separate sets of ROIs were placed in the kidney cortex, varying in location, size, or depth. The biological effects of CEUS on feline kidneys were estimated by measuring urinary enzymatic activities, analyzing urinary specific gravity, pH, protein, creatinine, albumin, and sediment, and measuring plasma urea and creatinine concentrations before and after CEUS. Finally, the impact of anesthesia on contrast enhancement of the spleen was investigated by imaging cats with CEUS, first awake and later under anesthesia, on separate days.

Typical perfusion patterns were found for each of the studied organs. The liver had a gradual and more heterogeneous perfusion pattern due to its dual blood supply and close proximity to the diaphragm. An obvious and statistically significant difference emerged in the perfusion between the kidney cortex and medulla. Enhancement in the spleen was very heterogeneous at the beginning of imaging, indicating focal dissimilarities in perfusion.
No significant differences emerged in the perfusion parameters between the pancreas, small intestine, and mesenteric lymph nodes.

The ROI placement and size were found to influence the quantitative measurements of CEUS. Increasing the depth or the size of the ROI decreased the peak intensity value significantly, suggesting that where and how the ROI is placed does matter in quantitative analyses.

A significant increase occurred in the urinary N-acetyl-β-D-glucosaminidase (NAG) to creatinine ratio after CEUS. No changes were noted in the serum biochemistry profile after CEUS, with the exception of a small decrease in blood urea concentration. The magnitude of the rise in the NAG/creatinine ratio was, however, less than the circadian variation reported earlier in healthy cats. Thus, the changes observed in the laboratory values after CEUS of the left kidney did not indicate any detrimental effects on the kidneys. Heterogeneity of the spleen was less, and the time of first contrast appearance earlier, in nonanesthetized cats than in anesthetized ones, suggesting that anesthesia increases the heterogeneity of the feline spleen in CEUS.

In conclusion, the results suggest that CEUS can also be used in feline veterinary patients as an additional diagnostic aid. The perfusion patterns found in the imaged organs were typical and similar to those seen earlier in other species, with the exception of the heterogeneous perfusion pattern in the cat spleen. Differences in the perfusion between organs corresponded with physiology. Based on the results, estimation of focal perfusion defects of the spleen in cats should be performed with caution and only after the disappearance of the initial heterogeneity, especially in anesthetized or sedated cats. Finally, these results indicate that CEUS can be used safely to analyze kidney perfusion in cats.
Future clinical studies are needed to evaluate the full potential of CEUS in feline medicine as a tool for diagnosing lesions in various organ systems.
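Quantitative CEUS analysis of the kind used here typically reduces the contrast signal within an ROI to a time–intensity curve and extracts perfusion parameters from it. A minimal sketch (illustrative only; the parameter names and the first-sample baseline convention are our assumptions, not the analysis software used in the thesis):

```python
def tic_parameters(times_s, intensities):
    """Extract basic perfusion parameters from a time-intensity curve (TIC).

    Peak enhancement is measured relative to the pre-contrast baseline
    (assumed here to be the first sample); time to peak is the time at
    which the maximum intensity occurs.
    """
    baseline = intensities[0]
    peak_idx = max(range(len(intensities)), key=lambda i: intensities[i])
    return {
        "peak_enhancement": intensities[peak_idx] - baseline,
        "time_to_peak_s": times_s[peak_idx],
    }

params = tic_parameters([0, 5, 10, 15, 20], [2, 6, 14, 9, 4])
# peak_enhancement = 12, time_to_peak_s = 10
```

The ROI findings above imply that such parameters should only be compared between ROIs of matched size and depth.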

Relevância:

10.00%

Publicador:

Resumo:

Physics teachers are in a key position to shape the attitudes and conceptions of future generations toward science and technology, as well as to educate future generations of scientists. Therefore, good teacher education is one of the key areas of a physics department's educational programme. This dissertation is a contribution to the research-based development of high-quality physics teacher education, designed to meet three central challenges of good teaching. The first challenge relates to the organization of physics content knowledge. The second challenge, connected to the first one, is to understand the role of experiments and models in (re)constructing the content knowledge of physics for purposes of teaching. The third challenge is to provide pre-service physics teachers with opportunities and resources for reflecting on and assessing their knowledge and experience of physics and physics education. This dissertation demonstrates how these challenges can be met when the content knowledge of physics, the relevant epistemological aspects of physics, and the pedagogical knowledge of teaching and learning physics are combined. The theoretical part of this dissertation is concerned with designing two didactical reconstructions for the purposes of physics teacher education: the didactical reconstruction of processes (DRoP) and the didactical reconstruction of structures (DRoS). This part starts by taking into account the required professional competencies of physics teachers, the pedagogical aspects of teaching and learning, and the benefits of graphical ways of representing knowledge. It then continues with the conceptual and philosophical analysis of physics, especially with the analysis of the role of experiments and models in constructing knowledge. This analysis is condensed in the form of an epistemological reconstruction of knowledge justification. Finally, these two parts are combined in the design and production of the DRoP and DRoS.
The DRoP captures the formation of knowledge about physical concepts and laws in a concise and simplified form while still retaining the authenticity of the processes by which the concepts were formed. The DRoS is used for representing the structural knowledge of physics, the connections between physical concepts, quantities and laws, to varying extents. Both DRoP and DRoS are represented graphically by means of flow charts consisting of nodes and directed links connecting the nodes. The empirical part discusses two case studies that show how the three challenges are met through the use of DRoP and DRoS, and how the outcomes of teaching solutions based on them are evaluated. The research approach is qualitative; it aims at in-depth evaluation and understanding of the usefulness of the didactical reconstructions. The data, which were collected from the advanced course for prospective physics teachers during 2001–2006, consisted of DRoP and DRoS flow charts made by students and of student interviews. The first case study discusses how student teachers used DRoP flow charts to understand the process of forming knowledge about the law of electromagnetic induction. The second case study discusses how student teachers learned to understand the development of physical quantities related to the temperature concept by using DRoS flow charts. In both studies, attention is focused on the use of DRoP and DRoS to organize knowledge and on the role of experiments and models in this organization process. The results show that the students' understanding of physics knowledge production improved and that their knowledge became more organized and coherent. It is shown that the flow charts, and the didactical reconstructions behind them, had an important role in achieving these positive learning results.
On the basis of the results reported here, the designed learning tools have been adopted as a standard part of the teaching solutions used in the physics teacher education courses in the Department of Physics, University of Helsinki.
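Structurally, the DRoP and DRoS flow charts described above are directed graphs of concept nodes joined by labelled links. A minimal sketch of such a representation (the class and the example concepts are our illustration, not the thesis's actual tool):

```python
from collections import defaultdict

class ConceptGraph:
    """Directed graph of concepts with labelled links, in the spirit of a
    DRoS-style flow chart (illustrative, hypothetical implementation)."""

    def __init__(self):
        self._edges = defaultdict(list)

    def link(self, src, dst, relation):
        """Add a directed, labelled link from concept src to concept dst."""
        self._edges[src].append((dst, relation))

    def successors(self, node):
        """Concepts directly reachable from node."""
        return [dst for dst, _ in self._edges[node]]

# Fragment of a temperature-concept structure (illustrative content):
g = ConceptGraph()
g.link("thermal equilibrium", "empirical temperature", "quantified by")
g.link("empirical temperature", "thermodynamic temperature", "generalized to")
```

Encoding the charts as data like this is one way such node-and-link diagrams can be stored and traversed programmatically.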

Relevância:

10.00%

Publicador:

Resumo:

Lakes serve as sites where terrestrially fixed carbon is remineralized and transferred back to the atmosphere. Their role in regional carbon cycling is especially important in the Boreal Zone, where lakes can cover up to 20% of the land area. Boreal lakes are often characterized by a brown water colour, which implies high levels of dissolved organic carbon from the surrounding terrestrial ecosystem, but the load of inorganic carbon from the catchment is largely unknown. Organic carbon is transformed into methane (CH4) and carbon dioxide (CO2) in biological processes that raise lake water gas concentrations above atmospheric equilibrium, thus making boreal lakes sources of these important greenhouse gases. However, flux estimates are often based on sporadic sampling and modelling, and actual flux measurements are scarce. Thus, the detailed temporal flux dynamics of greenhouse gases are still largely unknown.

One aim here was to reveal the natural dynamics of CH4 and CO2 concentrations and fluxes in a small boreal lake. The other aim was to test the applicability in this lake of a measuring technique for CO2 flux, the eddy covariance (EC) technique, and of a computational method for estimating primary production and community respiration, both commonly used in terrestrial research. Continuous surface water CO2 concentration measurements, also needed in free-water applications to estimate primary production and community respiration, were used over two open-water periods in a study of CO2 concentration dynamics. Traditional methods were also used to measure gas concentrations and fluxes. The study lake, Valkea-Kotinen, is a small, humic headwater lake within an old-growth forest catchment with no local anthropogenic disturbance; possible changes in gas dynamics thus reflect the natural variability in lake ecosystems. CH4 accumulated under the ice and in the hypolimnion during summer stratification.
The surface water CH4 concentration was always above atmospheric equilibrium, and thus the lake was a continuous source of CH4 to the atmosphere. However, the annual CH4 fluxes were small, 0.11 mol m⁻² yr⁻¹, and the timing of the fluxes differed from that of other published estimates. The highest fluxes are usually measured in spring after ice melt, but in Lake Valkea-Kotinen CH4 was effectively oxidised in spring, and the highest effluxes occurred in autumn after the summer stratification period. CO2 also accumulated under the ice, and the hypolimnetic CO2 concentration increased steadily during the stratification period. The surface water CO2 concentration was highest in spring and in autumn, whereas during stable stratification it was sometimes below atmospheric equilibrium. It showed diel, daily and seasonal variation; the diel cycle was clearly driven by light and thus reflected the metabolism of the lacustrine ecosystem. However, the diel cycle was sometimes blurred by injections of CO2-rich hypolimnetic water, so the surface water CO2 concentration was also controlled by stratification dynamics. The highest CO2 fluxes were measured in spring, in autumn, and during those hypolimnetic injections, which caused bursts of CO2 comparable with the spring and autumn fluxes. The annual fluxes averaged 77 (±11 SD) g C m⁻² yr⁻¹. To estimate the importance of the lake in recycling terrestrial carbon, the flux was normalized to the catchment area, and this normalized flux was compared with net ecosystem production estimates of -50 to 200 g C m⁻² yr⁻¹ from unmanaged forests in corresponding temperature and precipitation regimes in the literature. Within this range, the flux of Lake Valkea-Kotinen corresponds to anything from a 20% increase in the source strength of the surrounding forest to a 5% decrease in its sink strength. During a 5-day testing period in autumn, the free-water approach gave primary production and community respiration estimates 5- and 16-fold, respectively, those obtained with traditional bottle incubations.
These results parallel findings in the literature. Both methods adopted from the terrestrial community thus proved useful in lake studies as well. A large percentage of the EC data had to be rejected because the prerequisites of the method were not fulfilled; even so, the amount of accepted data remained large compared with what would be feasible with traditional methods. The EC measurements revealed that the widely used gas exchange model underestimates the flux, and they suggest that simultaneous measurements of actual turbulence at the water surface, combined with comparison of the different gas flux methods, are needed to revise the parameterization of the gas transfer velocity used in the models.
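The gas exchange model referred to above estimates the diffusive flux as the product of a gas transfer velocity and the concentration difference across the air–water interface, F = k (C_w − C_eq). A hedged sketch using one common wind-based parameterization of k (Cole & Caraco 1998); this is a generic example, not necessarily the exact model tested in the thesis:

```python
def k600_cole_caraco(u10_m_s):
    """Gas transfer velocity k600 (cm/h) from 10 m wind speed (Cole & Caraco 1998)."""
    return 2.07 + 0.215 * u10_m_s ** 1.7

def co2_flux_mmol_m2_d(c_water, c_equilibrium, u10_m_s, schmidt=600.0):
    """Diffusive CO2 flux (mmol m-2 d-1) from the boundary-layer model
    F = k * (C_w - C_eq), with concentrations in mmol m-3.

    k600 is scaled to the in-situ Schmidt number (default 600, i.e. no
    conversion) and converted from cm/h to m/d.
    """
    k_cm_h = k600_cole_caraco(u10_m_s) * (schmidt / 600.0) ** -0.5
    k_m_d = k_cm_h / 100.0 * 24.0
    return k_m_d * (c_water - c_equilibrium)

# Supersaturated surface water outgasses CO2; at equilibrium the flux is zero.
flux = co2_flux_mmol_m2_d(c_water=50.0, c_equilibrium=20.0, u10_m_s=3.0)
```

The EC comparison above suggests that such wind-only parameterizations of k can underestimate the true transfer velocity when near-surface turbulence is driven by other processes, e.g. convective cooling.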

Relevância:

10.00%

Publicador:

Resumo:

A distributed system is a collection of networked autonomous processing units which must work in a cooperative manner. Currently, large-scale distributed systems, such as various telecommunication and computer networks, are abundant and used in a multitude of tasks. The field of distributed computing studies what can be computed efficiently in such systems. Distributed systems are usually modelled as graphs where nodes represent the processors and edges denote communication links between processors. This thesis concentrates on the computational complexity of the distributed graph colouring problem. The objective of the graph colouring problem is to assign a colour to each node in such a way that no two nodes connected by an edge share the same colour. In particular, it is often desirable to use only a small number of colours. This task is a fundamental symmetry-breaking primitive in various distributed algorithms. A graph that has been coloured in this manner using at most k different colours is said to be k-coloured. This work examines the synchronous message-passing model of distributed computation: every node runs the same algorithm, and the system operates in discrete synchronous communication rounds. During each round, a node can communicate with its neighbours and perform local computation. In this model, the time complexity of a problem is the number of synchronous communication rounds required to solve the problem. It is known that 3-colouring any k-coloured directed cycle requires at least ½(log* k - 3) communication rounds and is possible in ½(log* k + 7) communication rounds for all k ≥ 3. This work shows that for any k ≥ 3, colouring a k-coloured directed cycle with at most three colours is possible in ½(log* k + 3) rounds. In contrast, it is also shown that for some values of k, colouring a directed cycle with at most three colours requires at least ½(log* k + 1) communication rounds. 
Furthermore, in the case of directed rooted trees, reducing a k-colouring into a 3-colouring requires at least log* k + 1 rounds for some k and is possible in log* k + 3 rounds for all k ≥ 3. The new positive and negative results are derived using computational methods, as the existence of distributed colouring algorithms corresponds to the colourability of so-called neighbourhood graphs. The colourability of these graphs is analysed using Boolean satisfiability (SAT) solvers. Finally, this thesis shows that similar methods are applicable to capturing the existence of distributed algorithms for other graph problems, such as the maximal matching problem.
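The colour-reduction results above concern iterative algorithms in the style of Cole and Vishkin, in which each node in one round compares the bit representation of its colour with that of its predecessor. A minimal sketch of one such reduction step on a directed cycle (a standard textbook construction, not the exact algorithm whose round complexity the thesis pins down):

```python
def cole_vishkin_step(colors):
    """One colour-reduction round on a directed cycle.

    colors[v] is the colour of node v; its predecessor is node (v-1) mod n.
    Given a proper colouring, each node finds the lowest bit position i at
    which its colour differs from its predecessor's and takes 2*i + (its own
    bit i) as its new colour. The result is again a proper colouring.
    """
    n = len(colors)
    new_colors = []
    for v in range(n):
        c, p = colors[v], colors[(v - 1) % n]
        diff = c ^ p                            # proper colouring => diff != 0
        i = (diff & -diff).bit_length() - 1     # lowest differing bit position
        new_colors.append(2 * i + ((c >> i) & 1))
    return new_colors

# An 8-coloured cycle (colours 0..7, so 3-bit colours) drops to a proper
# colouring using colours below 2*3 = 6 in a single round.
reduced = cole_vishkin_step(list(range(8)))
```

Iterating this step reduces a k-colouring to a constant number of colours in O(log* k) rounds, which is the regime of the upper and lower bounds discussed above.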