21 results for frequency analysis problem

in Helda - Digital Repository of the University of Helsinki


Relevance:

80.00%

Publisher:

Abstract:

This thesis explores migration and the attractiveness of urban living in the Greater Helsinki region. The aim of the thesis is to explore the attractiveness of the city of Helsinki in terms of regional migration and to identify what characterizes migration to Helsinki. The study focuses in particular on housing, which is a key factor influencing migration decisions in the region. Other central themes in the study are housing policy and regional competition among municipalities. The study considers only households moving within Finnish borders, excluding international migration. Migration is examined by comparing in- and out-migration in Helsinki, as well as by studying migration to the city's inner and outer areas. The primary research material in the study is questionnaire data collected by the National Consumer Research Centre; in this thesis the data is used to study migrants aged 25 to 45. The main research method is statistical analysis of the data using the SPSS software. Methods include frequency analysis, cross tabulation, factor analysis and descriptive analysis. Additionally, statistical data is used to complement the questionnaire data. The research results indicate that Helsinki's in- and out-migration differ both in the types of households that migrate and in the reasons why they migrate. Furthermore, differences can also be detected between migration to the inner and outer parts of Helsinki. According to the research results, a household's current phase of life is crucial in determining where and why it moves within the Greater Helsinki region. A household's set of values, on the other hand, seems to have a lesser impact on migration within the region, even though households moving to Helsinki seem to value a somewhat more urban lifestyle than those moving out of the city. The research also shows a direct correlation between the values of migrants and their current phase of life.
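The frequency analysis and cross tabulation mentioned above were run in SPSS; the same operations can be sketched in plain Python. The survey rows below are invented for illustration and are not taken from the National Consumer Research Centre data:

```python
from collections import Counter

# Hypothetical survey responses: (migration direction, household type).
rows = [
    ("in", "single"), ("in", "couple"), ("out", "family"),
    ("out", "family"), ("in", "couple"), ("out", "single"),
]

# Frequency analysis: counts of each migration direction.
freq = Counter(direction for direction, _ in rows)

# Cross tabulation: joint counts of household type by direction.
xtab = Counter((household, direction) for direction, household in rows)

print(freq["in"], freq["out"])  # 3 3
print(xtab[("family", "out")])  # 2
```

The same counts would appear as a frequency table and a two-way contingency table in SPSS output.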
Migration decisions are heavily influenced by wider societal issues. In the Greater Helsinki region the labor and housing markets appear to have a great influence on the direction of migration streams. According to the results, households move to and from Helsinki for different reasons. The primary reasons for moving to Helsinki are related to the city's diverse labor market and to the working careers of households. Issues related to urban living and an urban lifestyle seem to be relevant, although not the main reason why people move to Helsinki. The research material indicates that Helsinki's urban environment is both a pull and a push factor in migrants' decisions. The city attracts those seeking urban living, but does not appeal to households seeking more space and wishing to live closer to nature. According to the research, Helsinki with its densely built urban environment mainly attracts singles and childless couples, whereas the city region's other municipalities are more attractive for families with children. Housing policy is one of the main factors determining where people move within the Helsinki region. As for the city of Helsinki, improving the city's attractiveness seems to be closely linked to how well the city manages to execute its future housing policies and how well alternative living preferences can be taken into account in planning.

Relevance:

40.00%

Publisher:

Abstract:

This paper addresses several questions in the compensation literature by examining stock option compensation practices of Finnish firms. First, the results indicate that principal-agent theory succeeds quite well in predicting the use of stock options. Proxies for monitoring costs, growth opportunities, ownership structure, and risk are found to determine the use of incentives consistent with theory. Furthermore, the paper examines whether determinants of stock options targeted to top management differ from determinants of broad-based stock option plans. Some evidence is found that factors driving these two types of incentives differ. Second, the results reveal that systematic risk significantly increases the likelihood that firms adopt stock option plans, whereas total firm risk and unsystematic risk do not seem to affect this decision. Third, the results show that growth opportunities are related to time-dimensional contracting frequency, consistent with the argument that incentive levels deviate more rapidly from optimum in firms with high growth opportunities. Finally, the results suggest that vesting schedules are decreasing in financial leverage, and that contract maturity is decreasing in firm focus. In addition, both vesting schedules and contract maturity tend to be longer in firms involving state ownership.
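Analyses of this kind, relating the likelihood that a firm adopts a stock option plan to firm characteristics, are commonly run as logistic regressions. A minimal from-scratch sketch on invented data, where a single systematic-risk proxy (beta) predicts adoption; the variable names and numbers are hypothetical, not the paper's data:

```python
import math

def fit_logit(xs, ys, lr=0.5, steps=2000):
    """One-feature logistic regression fitted by batch gradient
    ascent on the log-likelihood; a is the intercept, b the slope."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += y - p
            gb += (y - p) * x
        a += lr * ga / len(xs)
        b += lr * gb / len(xs)
    return a, b

# Invented data: systematic-risk proxy (beta) vs. plan adoption.
betas = [0.5, 0.7, 0.9, 1.1, 1.3, 1.5]
adopts = [0, 0, 0, 1, 1, 1]
a, b = fit_logit(betas, adopts)
prob = lambda x: 1.0 / (1.0 + math.exp(-(a + b * x)))
# Higher systematic risk implies a higher predicted adoption probability.
print(b > 0, prob(1.5) > prob(0.5))  # True True
```

In practice such studies use a statistics package with standard errors and controls; the sketch only shows the functional form.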

Relevance:

30.00%

Publisher:

Abstract:

Design embraces several disciplines dedicated to the production of artifacts and services. These disciplines are quite independent, and only recently has psychological interest focused on them. Nowadays, psychological theories of design, also called the design cognition literature, describe the design process from the information-processing viewpoint. These models co-exist with normative standards of how designs should be crafted. In many places there are concrete discrepancies between the two, in a way that resembles the differences between actual and ideal decision-making. This study aimed to explore a possible difference related to problem decomposition. Decomposition is a standard component of human problem-solving models and is also included in the normative models of design. The idea of decomposition is to focus on a single aspect of the problem at a time. Despite its significance, the nature of decomposition in conceptual design is poorly understood and has only been preliminarily investigated. This study addressed the status of decomposition in the conceptual design of products using protocol analysis. Previous empirical investigations have argued that there is implicit and explicit decomposition, but have not provided a theoretical basis for the two. Therefore, the current research began by reviewing the problem-solving and design literature and then composing a cognitive model of the solution search of conceptual design. The result is a synthetic view which describes recognition and decomposition as the basic schemata for conceptual design. A psychological experiment was conducted to explore decomposition. In the test, sixteen (N=16) senior students of mechanical engineering created concepts for two alternative tasks. The concurrent think-aloud method and protocol analysis were used to study decomposition.
The results showed that despite the emphasis on decomposition in formal education, only a few designers (N=3) used decomposition explicitly and spontaneously in the presented tasks, although the designers in general applied a top-down control strategy. Instead, inferring from the use of structured strategies, the designers always relied on implicit decomposition. These results confirm the initial observations found in the literature, but they also suggest that decomposition should be investigated further. In the future, the benefits and possibilities of explicit decomposition should be considered along with the cognitive mechanisms behind decomposition. After that, the current results could be reinterpreted.

Relevance:

30.00%

Publisher:

Abstract:

The issue of the usefulness of different prosopis species versus their status as weeds is a matter of hot debate around the world. The tree Prosopis juliflora had by 2000 been proclaimed weedy in its native range in South America and elsewhere in the dry tropics. P. juliflora, or mesquite, has a 90-year history in Sudan. During the early 1990s popular opinion in central Sudan and the Sudanese Government had begun to consider prosopis a noxious weed and a problematic tree species due to its aggressive ability to invade farmlands and pastures, especially in and around irrigated agricultural lands. As a consequence, prosopis was officially declared an invasive alien species in Sudan as well, and in 1995 a presidential decree for its eradication was issued. Using a total economic valuation (TEV) approach, this study analysed the impacts of prosopis on local livelihoods in two contrasting irrigated agricultural schemes. Primarily a problem-based approach was used, in which non-market values were derived using ecological economic tools. In the New Halfa Irrigation Scheme in Kassala State, four separate household surveys were conducted because of the diversity of the respective population groups. The main aim here was to study the magnitude of the environmental economic benefits and costs derived from the invasion of prosopis in a large agricultural irrigation scheme on clay soil. The other study site, the Gandato Irrigation Scheme in River Nile State, represented the impacts of prosopis that an irrigation scheme was confronted with on sandy soil in the arid and semi-arid ecozones along the main River Nile. The two cases showed distinctly different effects of prosopis, but both indicated that the benefits exceed the costs. The valuation on clay soil in New Halfa identified a benefit/cost ratio of 2.1, while this indicator equalled 46 on the sandy soils of Gandato. The valuation results were site-specific and based on local market prices.
The most important beneficial impacts of prosopis on local livelihoods were derived from free-grazing forage for livestock, environmental conservation of the native vegetation, wood and non-wood forest products, as well as shelterbelt effects. The main social costs of prosopis arose from weeding and clearing it from farmlands and canalsides, from thorn injuries to humans and livestock, as well as from repair expenses for vehicle tyre punctures. Of the population groups, the tenants faced most of the detrimental impacts, while the landless population groups (originating from western and eastern Sudan) as well as the nomads were highly dependent on this tree resource. For the Gandato site, the monetized benefit/cost ratio of 46 still excluded several additional beneficial impacts of prosopis in the area that were difficult to quantify and monetize credibly. In River Nile State the beneficial impact could thus be seen as completely outweighing the costs of prosopis. The results can contribute to the formulation of national and local forest and agricultural policies related to prosopis in Sudan and can also be used in other countries facing similar impacts caused by this tree.
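Benefit/cost ratios like the ones reported above (2.1 for New Halfa, 46 for Gandato) are aggregates of monetized impacts. A minimal sketch with invented placeholder values, not the thesis figures:

```python
# Monetized impacts per household per year; item names and values
# are invented placeholders, not figures from the valuation study.
benefits = {"forage": 120.0, "wood products": 60.0, "shelterbelt": 30.0}
costs = {"weeding": 70.0, "thorn injuries": 20.0, "tyre repairs": 10.0}

# Benefit/cost ratio: total monetized benefits over total costs.
bcr = sum(benefits.values()) / sum(costs.values())
print(round(bcr, 2))  # 2.1
```

A ratio above 1 indicates that the monetized benefits exceed the costs, as found at both study sites.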

Relevance:

30.00%

Publisher:

Abstract:

Pectobacterium atrosepticum is a Gram-negative bacterium that causes blackleg and soft rot of potato. The optimum temperature of P. atrosepticum is rather low, and the bacterium is common in temperate regions. Blackleg spreads mainly through seed potatoes and is therefore a problem especially in seed potato production. The genome of P. atrosepticum strain SCRI1043 has been published, and the strain is studied as a model organism for understanding the pathogenesis of soft rot and blackleg. This opportunistic pathogen can live latently in the host plant for months without causing visible symptoms. Under favorable conditions the bacteria begin to divide and to produce enzymes that degrade plant tissues. The rotting plant mass provides nutrients for bacterial growth and enables colonization of the host plant. The role of cell-wall-degrading enzymes in pathogenesis is well known, but little is known about the symptomless period and the early stages of the disease. The genome of the bacterium encodes many toxins, adhesins, hemolysins and other proteins that may play a role in pathogenesis. In this work, proteomics and microarray analysis were used to study the secreted proteins and gene expression of P. atrosepticum. Proteins that are secreted out of the bacterium are likely to function in pathogenesis, because they are in direct contact with the host plant. The analyses were performed under conditions resembling the apoplast of the plant: low pH, few nutrients and low temperature. The effect of the presence of the host plant on protein production and gene expression was studied by adding potato extract to the growth medium. The study identified many already known and potentially pathogenesis-related proteins of P. atrosepticum. Potato extract increased the expression of genes encoding a recently identified protein secretion pathway (type VI secretion, T6SS). In addition, the bacterium was observed to secrete several T6SS-related proteins into growth medium supplemented with potato extract. The significance of the T6SS for bacteria is still unclear, and conflicting results have been published on its effect on pathogenesis. Understanding soft rot and blackleg at the molecular level provides a basis for applied research aiming at disease control. This study adds to our knowledge of the plant-pathogen interaction and can in the future be exploited, for example, in diagnostics, in breeding resistant potato varieties, or in improving cultivation and storage conditions.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in the year 2006 about 4,000 farms worldwide used over 6,000 milking robots. There is a worldwide movement with the objective of fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper uses for monitoring animals often decreases. This has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour-intensive as an on-farm method, and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm Suitia.
The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier model was chosen for the task. The data was divided into two parts, and 5,074 measurements from 37 cows were used to train the model. The model was evaluated for its ability to detect lameness in the validation dataset, which had 4,868 measurements from 36 cows. The model was able to classify 96% of the measurements correctly as sound or lame cows, and 100% of the lameness cases in the validation data were identified. The proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and can be used in a real-time lameness monitoring system.
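A probabilistic neural network of the kind chosen for the classifier sums Gaussian kernels over each class's training samples and picks the class with the highest estimated density. A minimal sketch with invented leg-load-like features, not the actual balance data or the thesis model:

```python
import math

def pnn_classify(x, train, sigma=1.0):
    """Probabilistic neural network: the class with the largest
    average Gaussian-kernel density over its training samples wins.
    train maps class label -> list of feature vectors."""
    scores = {}
    for label, samples in train.items():
        s = 0.0
        for v in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(x, v))
            s += math.exp(-d2 / (2 * sigma ** 2))
        scores[label] = s / len(samples)
    return max(scores, key=scores.get)

# Toy features: (leg-load asymmetry, kicks per milking); all invented.
train = {
    "sound": [(0.05, 0.0), (0.10, 1.0), (0.08, 0.0)],
    "lame":  [(0.40, 3.0), (0.55, 2.0), (0.45, 4.0)],
}
print(pnn_classify((0.50, 3.0), train))  # lame
print(pnn_classify((0.07, 0.0), train))  # sound
```

PNNs train instantly (they just store the samples), which makes them convenient for repeatedly updated on-farm data.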

Relevance:

30.00%

Publisher:

Abstract:

Inadvertent climate modification has led to an increase in urban temperatures compared to the surrounding rural areas. The main reason for the temperature rise is the altered partitioning of input net radiation into heat storage and sensible and latent heat fluxes, in addition to the anthropogenic heat flux. The heat storage flux and the anthropogenic heat flux have not yet been determined for Helsinki, and they are not directly measurable. In contrast, the turbulent fluxes of sensible and latent heat, as well as net radiation, can be measured, and the anthropogenic heat flux together with the heat storage flux can be solved as a residual. As a result, all inaccuracies in the determination of the energy balance components propagate to the residual term, and special attention must be paid to the accurate determination of the components. One source of error in the turbulent fluxes is the attenuation of fluctuations at high frequencies, which can be accounted for by high-frequency spectral corrections. The aim of this study is twofold: to assess the relevance of high-frequency corrections to water vapor fluxes and to assess the temporal variation of the energy fluxes. Turbulent fluxes of sensible and latent heat have been measured at the SMEAR III station, Helsinki, since December 2005 using the eddy covariance technique. In addition, net radiation measurements have been ongoing since July 2007. The calculation methods used in this study consist of widely accepted eddy covariance post-processing methods together with Fourier and wavelet analysis. The high-frequency spectral correction using the traditional transfer function method is highly dependent on relative humidity and has an 11% effect on the latent heat flux. This method is based on an assumption of spectral similarity, which is shown not to be valid. A new correction method using wavelet analysis is therefore introduced, and it seems to account for the high-frequency variation deficit.
However, the resulting wavelet correction remains small in contrast to the traditional transfer function correction. The energy fluxes exhibit behavior characteristic of urban environments: the energy input is channeled into sensible heat, as the latent heat flux is restricted by water availability. The monthly mean residual of the energy balance ranges from 30 W m⁻² in summer to −35 W m⁻² in winter, meaning that heat is stored in the ground during summer. Furthermore, the anthropogenic heat flux is approximated to be 50 W m⁻² during winter, when residential heating is important.
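The residual term described above follows directly from the surface energy balance. A minimal sketch; the flux values are invented round numbers, not SMEAR III measurements:

```python
# Surface energy balance: Rn = H + LE + (dQs + QF), so the unmeasured
# storage + anthropogenic term is solved as residual = Rn - H - LE.
# All values are in W m^-2 and are illustrative, not measured.
def energy_balance_residual(rn, h, le):
    """Residual term (heat storage + anthropogenic heat flux)."""
    return rn - h - le

# Summer-like case: strong net radiation, sensible-heat dominated.
print(energy_balance_residual(rn=400.0, h=250.0, le=120.0))  # 30.0
# Winter-like case: negative residual, heat released from storage/heating.
print(energy_balance_residual(rn=50.0, h=60.0, le=25.0))  # -35.0
```

As the abstract notes, every measurement error in Rn, H or LE lands directly in this residual.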

Relevance:

30.00%

Publisher:

Abstract:

Metabolism is the cellular subsystem responsible for the generation of energy from nutrients and the production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines, including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction and the study of the evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise in terms of complexity and feasibility between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to correspond closely to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems.
Such problems often limit the usability of reconstructed models and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it on real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in ReMatch, a web-based software intended for the reconstruction of models for 13C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
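The connectivity requirement behind gapless modeling can be illustrated with a toy forward-propagation check: a reaction can fire only once all of its substrates are producible, and a network is gapped if some reaction can never fire from the available nutrients. The reactions below are invented, and this reachability sketch is not the thesis's actual optimization formulation:

```python
def producible(reactions, seeds):
    """Forward-propagate the producible metabolite set: a reaction
    fires once all of its substrates are available. A toy
    reachability check in the spirit of gapless modeling."""
    avail = set(seeds)
    changed = True
    while changed:
        changed = False
        for substrates, products in reactions:
            if set(substrates) <= avail and not set(products) <= avail:
                avail |= set(products)
                changed = True
    return avail

# Hypothetical reactions as (substrates, products); names are invented.
rxns = [
    (["glc"], ["g6p"]),
    (["g6p"], ["pyr"]),
    (["pyr", "nh4"], ["ala"]),  # a gap: nothing supplies nh4
]
print(sorted(producible(rxns, ["glc"])))          # ['g6p', 'glc', 'pyr']
print("ala" in producible(rxns, ["glc", "nh4"]))  # True
```

A gapless reconstruction would add or select reactions so that every included reaction eventually becomes able to fire.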

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve distribution of a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which may be accessible to end users either exclusively or non-exclusively. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from different perspectives: optimality of the protocols, selfishness of end users, and fairness of the protocol for end users. On the one hand, the multifaceted analysis allows us to select the best-suited protocols from a set of available ones based on trade-offs among optimality criteria. On the other hand, predictions about the future Internet dictate new optimality rules we should take into account and new properties of the networks that can no longer be neglected. In this thesis we have studied new protocols for such resource-sharing problems as the backoff protocol, defense mechanisms against denial-of-service attacks, and fairness and confidentiality for users in overlay networks. For the backoff protocol we present an analysis of a general backoff scheme, where an optimization is applied to a general-view backoff function. This leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme in order to achieve fairness for the participants in an unfair environment, such as one with unequal wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities.
For the first condition we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism in order to restrict resource access for misbehaving nodes. Finally, we study the reputation scheme for overlay and peer-to-peer networks, where the resource is not placed on a common station but is spread across the network. The theoretical analysis suggests what behavior an end station will select under such a reputation mechanism.
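As an illustration of the family of backoff schemes analyzed, the textbook truncated binary exponential backoff is one simple instance of a general backoff function; the parameters below are illustrative, not the thesis's model:

```python
import random

def backoff_delay(attempt, base=1.0, cap=64.0, rng=random.random):
    """Truncated binary exponential backoff: after the k-th failed
    attempt, wait a uniform random time in [0, min(cap, base * 2**k)].
    This is the textbook scheme, one simple instance of a general
    backoff function, not the exact model analyzed in the thesis."""
    window = min(cap, base * 2 ** attempt)
    return rng() * window

# Contention windows double per failure and saturate at the cap.
windows = [min(64.0, 2.0 ** k) for k in range(8)]
print(windows)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 64.0]
```

A general-view analysis replaces the doubling rule with an arbitrary function of the attempt count and optimizes over that function.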

Relevance:

30.00%

Publisher:

Abstract:

The main focus of this study is the epilogue of 4QMMT (4QMiqsat Ma'aseh ha-Torah), a text of obscure genre containing a halakhic section, found in cave 4 at Qumran. In the official edition published in the series Discoveries in the Judaean Desert (DJD X), the extant document was divided by its editors, Elisha Qimron and John Strugnell, into three literary divisions: Section A) the calendar section representing a 364-day solar calendar, Section B) the halakhot, and Section C) an epilogue. The work begins with a text-critical inspection of the manuscripts containing text from the epilogue (mss 4Q397, 4Q398, and 4Q399). However, since the relationship of the epilogue to the other sections of the whole document 4QMMT is under investigation, the calendrical fragments (4Q327 and 4Q394 3-7, lines 1-3) and the halakhic section also receive some attention, albeit more limited and purpose-oriented. In Ch. 2, after a transcription of the fragments of the epilogue, a synopsis is presented in order to evaluate the composite text of the DJD X edition in light of the evidence provided by the individual manuscripts. As a result, several critical comments are offered and, finally, an alternative arrangement of the fragments of the epilogue with an English translation. In the following chapter (Ch. 3), the diversity of the two main literary divisions, the halakhic section and the epilogue, is discussed, and it is demonstrated that the author(s) of 4QMMT adopted and adjusted the covenantal pattern known from biblical law collections, more specifically Deuteronomy. The question of the genre of 4QMMT is investigated in Ch. 4. The final chapter (Ch. 5) contains an analysis of the use of Scripture in the epilogue. In a close reading, both the explicit citations and the more subtle allusions are investigated in an attempt to trace the theology of the epilogue. The main emphases of the epilogue are covenantal faithfulness, repentance and return.
The contents of the document reflect a grave concern for the purity of the cult in Jerusalem, and in the epilogue Deuteronomic language and expressions are used to convince the readers of the necessity of a reformation. The large number of late copies found in cave 4 at Qumran witnesses to the significance of 4QMMT and the continuing importance of the Jerusalem Temple for the Qumran community.

Relevance:

30.00%

Publisher:

Abstract:

Autoimmune diseases are a major health problem. Usually autoimmune disorders are multifactorial, and their pathogenesis involves a combination of predisposing variations in the genome and other factors such as environmental triggers. APECED (autoimmune polyendocrinopathy-candidiasis-ectodermal dystrophy) is a rare, recessively inherited autoimmune disease caused by mutations in a single gene. Patients with APECED suffer from several organ-specific autoimmune disorders, often affecting the endocrine glands. The defective gene, AIRE, codes for a transcriptional regulator. The AIRE (autoimmune regulator) protein controls the expression of hundreds of genes, representing a substantial subset of the tissue-specific antigens which are presented to developing T cells in the thymus, and it has proven to be a key molecule in the establishment of immunological tolerance. However, the molecular mechanisms by which AIRE mediates its functions are still largely obscure. The aim of this thesis has been to elucidate the functions of AIRE by studying the molecular interactions it is involved in, utilizing different cultured cell models. A potential molecular mechanism for the exceptional dominant inheritance of APECED in one family, carrying a glycine 228 to tryptophan (G228W) mutation, is described in this thesis. It was shown that the AIRE polypeptide with the G228W mutation has a dominant negative effect, binding the wild-type AIRE and inhibiting its transactivation capacity in vitro. The data also emphasize the importance of homomultimerization of AIRE in vivo. Furthermore, two novel protein families interacting with AIRE were identified. The importin alpha molecules regulate the nuclear import of AIRE by binding to the nuclear localization signal of AIRE, delineated as a classical monopartite signal sequence. The interaction of AIRE with PIAS E3 SUMO ligases indicates a link to the sumoylation pathway, which plays an important role in the regulation of nuclear architecture.
It was shown that AIRE is not a target for SUMO modification but enhances the localization of SUMO1 and PIAS1 proteins to nuclear bodies. Additional support for the suggestion that AIRE preferentially up-regulates genes with a tissue-specific expression pattern and down-regulates housekeeping genes was obtained from transactivation studies performed with two models: the human insulin and cystatin B promoters. Furthermore, AIRE and PIAS activate the insulin promoter concurrently in a transactivation assay, indicating that their interaction is biologically relevant. Identification of novel interaction partners for AIRE provides information about the molecular pathways involved in the establishment of immunological tolerance and deepens our understanding of the role played by AIRE not only in APECED but possibly also in several other autoimmune diseases.

Relevance:

30.00%

Publisher:

Abstract:

The first aim of the current study was to evaluate the survival of total hip arthroplasty (THA) in patients aged 55 years and older at a nationwide level. The second aim was to evaluate, on a nationwide basis, the geographical variation in the incidence of primary THA for primary osteoarthritis (OA) and to identify the variables possibly associated with this variation. The third aim was to evaluate the effects of hospital volume on the length of stay, the number of re-admissions and the number of complications of THR at the population level in Finland. The survival of implants was analysed based on data from the Finnish Arthroplasty Register. The incidence and hospital volume data were obtained from the Hospital Discharge Register. Cementless total hip replacements had a significantly reduced risk of revision for aseptic loosening compared with cemented hip replacements. When revision for any reason was the end point in the survival analyses, no significant differences were found between the groups. Adjusted incidence ratios of THA varied from 1.9- to 3.0-fold during the study period. Neither the average income within a region nor the morbidity index was associated with the incidence of THA. Across the four categories of hospital volume of total hip replacements, the length of the surgical treatment period was shorter for the highest-volume group than for the lowest-volume group. The odds ratio for dislocations was significantly lower in the high-volume group than in the low-volume group. In patients who were 55 years of age or older, the survival of cementless total hip replacements was as good as that of the cemented replacements. However, multiple wear-related revisions of the cementless cups indicate that excessive polyethylene wear was a major clinical problem with modular cementless cups. The variation in the long-term rates of survival for different cemented stems was considerable.
Cementless proximal porous-coated stems were found to be a good option for elderly patients. In hospitals performing hip surgery with a large repertoire, the indications for performing THA due to primary OA were strict. The socio-economic status of the patient had no apparent effect on the THA rate. Concentrating hip replacements in high-volume hospitals should reduce costs by significantly shortening the length of stay, and may reduce the dislocation rate.
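Implant survival in registry studies of this kind is typically summarized with Kaplan-Meier estimates. A minimal sketch with invented follow-up data, not Finnish Arthroplasty Register figures; ties between event and censoring times are not handled, for simplicity:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimator: at each event time, the survival
    curve drops by the factor (at_risk - 1) / at_risk.
    events[i] is True for a revision, False for censoring.
    Assumes no tied times; all numbers below are invented."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    s = 1.0
    curve = []
    for t, event in pairs:
        if event:
            s *= (n_at_risk - 1) / n_at_risk
            curve.append((t, s))
        n_at_risk -= 1
    return curve

times = [2, 3, 4, 6, 8, 10]  # years of follow-up per implant
events = [True, False, True, True, False, False]
curve = kaplan_meier(times, events)
print([(t, round(s, 3)) for t, s in curve])  # [(2, 0.833), (4, 0.625), (6, 0.417)]
```

Censored implants (still in place at last follow-up) shrink the at-risk set without dropping the curve, which is what makes registry survival comparisons between cemented and cementless groups possible.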

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances to the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few.

The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, encoded in delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high-precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred level precision. This progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during some fraction of the first second after the Big Bang.

This thesis is concerned with high-precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The approximate methods studied are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage.

We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory. Next we discuss the map-making problem of a CMB experiment and the characterization of the residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model, the correlated isocurvature fluctuations. Currently available data are shown to indicate that future experiments are certainly needed to provide more information on these extra degrees of freedom. Any solid evidence of isocurvature modes would have a considerable impact due to their power in model selection.
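The destriping idea mentioned in the abstract can be illustrated with a toy sketch (the 1-D scanning setup, sizes and all variable names here are illustrative assumptions, not the pipeline of the thesis): the time-ordered data (TOD) are modelled as sky signal plus one constant offset per baseline chunk, and the offsets are estimated by alternating between binning a map and averaging the map-subtracted residual over each chunk.

```python
import numpy as np

rng = np.random.default_rng(0)

npix = 32          # pixels in a toy 1-D "sky" map
nbase = 50         # number of baseline chunks
blen = 64          # samples per baseline chunk
nsamp = nbase * blen

# Toy sky and scanning pattern: each sample observes one random pixel.
sky = rng.standard_normal(npix)
pix = rng.integers(0, npix, size=nsamp)   # pointing matrix P as pixel indices

# TOD = sky signal + slowly varying offsets (1/f-like stripes) + white noise.
true_offsets = np.repeat(rng.standard_normal(nbase) * 5.0, blen)
tod = sky[pix] + true_offsets + 0.01 * rng.standard_normal(nsamp)

def bin_map(d, pix, npix):
    """Naive map-maker: average all samples falling in each pixel."""
    m = np.zeros(npix)
    hits = np.zeros(npix)
    np.add.at(m, pix, d)
    np.add.at(hits, pix, 1)
    return m / np.maximum(hits, 1)

# Destriping by alternating minimization: given the offsets, bin a map;
# given the map, re-estimate one constant offset per chunk from the
# map-subtracted residual.
offsets = np.zeros(nbase)
for _ in range(20):
    cleaned = tod - np.repeat(offsets, blen)
    m = bin_map(cleaned, pix, npix)
    resid = tod - m[pix]
    offsets = resid.reshape(nbase, blen).mean(axis=1)

destriped = bin_map(tod - np.repeat(offsets, blen), pix, npix)
```

The recovered map is determined only up to a global constant, since a uniform level can be moved freely between the map and the offsets. Real pipelines solve the same least-squares problem as one large linear system, often with noise priors on the baselines; the alternating scheme above is just the simplest iterative variant.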

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This study offers a reconstruction and critical evaluation of globalization theory, a perspective that has been central to sociology and cultural studies in recent decades, from the viewpoint of media and communications. As the study shows, sociological and cultural globalization theorists rely heavily on arguments concerning media and communications, especially the so-called new information and communication technologies, in the construction of their frameworks. In addition to deepening the understanding of globalization theory, the study provides new critical knowledge of the problematic consequences that follow from such a strong investment in media and communications in contemporary theory. The book is divided into four parts. The first part presents the research problem, the approach and the theoretical contexts of the study. Following the introduction in Chapter 1, I identify the core elements of globalization theory in Chapter 2. At the heart of globalization theory is the claim that recent decades have witnessed massive changes in the spatio-temporal constitution of society, caused by new media and communications in particular, and that these changes necessitate the rethinking of the foundations of social theory as a whole. Chapter 3 introduces three paradigms of media research (the political economy of media, cultural studies and medium theory), the discussion of which will make it easier to understand the key issues and controversies that emerge in academic globalization theorists' treatment of media and communications. The next two parts offer a close reading of four theorists whose works I use as entry points into academic debates on globalization. I argue that we can make sense of mainstream positions on globalization by dividing them into two paradigms: on the one hand, media-technological explanations of globalization and, on the other, cultural globalization theory.
As examples of the former, I discuss the works of Manuel Castells (Chapter 4) and Scott Lash (Chapter 5). I maintain that their analyses of globalization processes are overly media-centric and result in an unhistorical and uncritical understanding of social power in an era of capitalist globalization. A related evaluation of the second paradigm, cultural globalization theory, as exemplified by Arjun Appadurai and John Tomlinson, is presented in Chapter 6. I argue that, due to their rejection of the importance of nation states and the notion of cultural imperialism for cultural analysis, and their replacement of these with a framework of media-generated deterritorializations and flows, these theorists underplay the importance of the neoliberalization of cultures throughout the world. The fourth part (Chapter 7) presents a central research finding of this study, namely that the media-centrism of globalization theory can be understood in the context of the emergence of neoliberalism. I find it problematic that, at the very time when capitalist dynamics have been strengthened in social and cultural life, advocates of globalization theory have directed attention to media-technological changes and their sweeping socio-cultural consequences, instead of analyzing the powerful material forces that shape society and culture. I further argue that this shift serves not only analytical but also utopian functions, that is, a longing for a better world in times when such longing is otherwise considered impracticable.