15 results for Speed Limit Signs.

in Helda - Digital Repository of the University of Helsinki


Relevance: 20.00%

Publisher:

Abstract:

Miniaturization of analytical instrumentation is attracting growing interest in response to the explosive demand for rapid, yet sensitive analytical methods and low-cost, highly automated instruments for pharmaceutical and bioanalyses and environmental monitoring. Microfabrication technology, in particular, has enabled fabrication of low-cost microdevices with a high degree of integrated functions, such as sample preparation, chemical reaction, separation, and detection, on a single microchip. These miniaturized total chemical analysis systems (microTAS or lab-on-a-chip) can also be arrayed for parallel analyses in order to accelerate sample throughput. Other motivations include reduced sample consumption and waste production as well as increased speed of analysis. One of the most promising hyphenated techniques in analytical chemistry is the combination of a microfluidic separation chip and a mass spectrometer (MS). In this work, emerging polymer microfabrication techniques, ultraviolet lithography in particular, were exploited to develop a capillary electrophoresis (CE) separation chip which incorporates a monolithically integrated electrospray ionization (ESI) emitter for efficient coupling with MS. An epoxy photoresist, SU-8, was adopted as the structural material and characterized with respect to its physicochemical properties relevant to chip-based CE and ESI/MS, namely surface charge, surface interactions, heat transfer, and solvent compatibility. As a result, SU-8 was found to be a favorable material to substitute for the more commonly used glass and silicon in microfluidic applications. In addition, infrared (IR) thermography was introduced as a direct, non-intrusive method to examine heat transfer and thermal gradients during microchip-CE. The IR data were validated through numerical modeling. The analytical performance of SU-8-based microchips was established for qualitative and quantitative CE-ESI/MS analysis of small drug compounds, peptides, and proteins. The CE separation efficiency was found to be similar to that of commercial glass microchips and conventional CE systems. Typical analysis times were only 30–90 s per sample, indicating feasibility for high-throughput analysis. Moreover, a mass detection limit at the low-attomole level, as low as 10^5 molecules, was achieved utilizing MS detection. The SU-8 microchips developed in this work could also be mass-produced at low cost and with nearly identical performance from chip to chip. Until this work, attempts to combine CE separation with ESI in a chip-based system amenable to batch fabrication and capable of high, reproducible analytical performance had not been successful. Thus, the CE-ESI chip developed in this work is a substantial step toward lab-on-a-chip technology.
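For orientation, the quoted detection limit can be checked against Avogadro's number; this conversion is a simple aside, not part of the original abstract:

```latex
1\ \mathrm{amol} = 10^{-18}\ \mathrm{mol} \times 6.022\times10^{23}\ \mathrm{mol}^{-1} \approx 6\times10^{5}\ \mathrm{molecules}
```

so a limit of roughly 10^5 molecules corresponds to about 0.2 amol, consistent with the low-attomole range quoted above.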

Relevance: 20.00%

Publisher:

Abstract:

This dissertation contributes to the fields of theoretical translation studies, semiotic translation theory, and the semiotics of translation. The aim of this work is to explore the alternatives and the potential that semiotic approaches to translation entail from the viewpoint of contemporary translation studies. The overall objective is thus to show that a general semiotic translation theory, and in particular a Peircean translation theory, are possible and indispensable. Furthermore, this study contributes to the semiotranslational approach and to its theory-building by developing the concept of abductive translation (studies). The specific theoretical frame of reference adopted in this study is provided by the semiotranslation introduced by Dinda L. Gorlée. This approach is primarily based on the semeiotic of Charles Sanders Peirce (1839–1914) and aims at a fusion of semiotics and translation studies. A more general framework is provided by the threefold background and material: the published and unpublished writings of Peirce, Peirce scholarship and Peircean-semiotic publications, as well as the translation-theoretical literature. Part One of this study concentrates on the justification, existence, and nature of the semiotic approaches to translation. This part provides a historical survey, a status report, and a discussion of this area of research, by employing the findings in a boundary-clearing that is multilayered both conceptually and terminologically. Part Two deals with Peircean semiotranslation. Here Gorlée's semiotranslational research is examined by focusing on the starting points, features, and development of semiotranslation. Attention is also paid to the state of the art of semiotranslation theory and to the possibilities for future elaborations. Part Three focuses on the semiotranslational claim that translation is an abductive activity. The concept of abductive translation is based on abduction, one of Peirce's three modes of reasoning; at the same time Firstness, the category of abduction, becomes foregrounded. Abductive translation, as a form of possibilistic translation, thus receives an extensive theoretical discussion here, with examples in which abduction manifests itself as (scientific) reasoning and as everyday contemplation. In the course of this treatise, translation is first equated with sign action, then with interpretation, and finally with reasoning. All these approaches appear to embody different facets of the same phenomenon, Peirce's ubiquitous semiosis, and they all suggest that translation is inherently an intersemiotic activity in which a sign is inferred from another sign. Translation is therefore semiosis, semiosis is translation and interpretation, interpretation is reasoning, and so on ad infinitum, all being manifestations of the art of marshalling signs. The three parts of this study are linked by the overall goal of abductive translation studies: investigation into abductive translation develops the theory of semiotranslation, and this enrichment of semiotranslation in turn constructs a semiotic paradigm within translation studies.

Relevance: 20.00%

Publisher:

Abstract:

Books' Paths to Readers describes the history of the origins and consolidation of modern, open book stores in Finland in 1740–1860. The thesis approaches the book trade as a part of print culture. Instead of the choice made in literary studies to concentrate on texts and writers, book history seeks to describe the print culture of a society and how literary activities and societies interconnect. For book historians, printed works are creations of various individuals and groups: writers, printers, editors, booksellers, censors, critics, and finally, readers. They all take part in the creation, delivery, and interpretation of printed works. The study reveals the ways in which selling and distributing books have influenced printed works and the literary and print culture. The research period 1740–1860 covers the so-called second revolution of the book, or the modernisation of print culture. The thesis describes the history of 60 book stores and their 96 owners. The study concentrates on three themes: firstly, how this particular book trade network became a central institution for the distribution of printed works; secondly, what the relations were between cosmopolitan European book markets and the national cultural sphere; and thirdly, how book stores functioned as cultural institutions and business enterprises. Book stores with a varied assortment, targeted to all readers, became the main institution of the book trade in Finland during 1740–1860. This happened because of three features. First, the bookbinders' monopoly on selling bound copies in Sweden was abolished in the 1740s. As a consequence, entrepreneurs could concentrate solely on trade activities and offer copies from various publishers at their stores. Secondly, the common business model of bartering was replaced by selling copies for cash, first in the German book trade centre Leipzig in the 1770s. The change intensified book market activities and Finnish book stores' foreign connections. Thirdly, after Finland was annexed to the Russian Empire in 1809, the Grand Duchy's administration steered the foreign book trade to book stores (because of censorship demands). Up to the 1830s, book stores were available only in Helsinki and Turku. During the next ten years, book stores opened in six regional centres. The early entrepreneurs usually ran vertical businesses consisting of printing, publishing, and distribution activities. This strategy lowered costs, eased the delivery of printed works, and helped to create elaborate centres for all book activities. These book stores' main clientele consisted of the Swedish-speaking gentry. During the late 1840s, various opinion leaders called for the development of a national Finnish print culture, and also of book stores. As a result, during the five years before the beginning of the Crimean War (1853–1856), book stores were opened in almost all Finnish towns: at the beginning of the war, 36 book stores operated in 21 towns. The later booksellers, mainly operating in small towns among Finnish-speaking people, usually settled strictly for selling activities. Book stores received most of their revenues from selling foreign titles. Swedish, German, French, and Belgian (pirate editions of popular French novels) books were widely available to the multilingual gentry. Foreign titles and copies brought in most of the revenues. Censorship inspections or unfavourable customs fees did not limit the imports. Even though local Finnish print production rose steadily, many copies, even titles, were never delivered via book stores.
Only during the 1840s and 1850s did the most advanced publishers concentrate on creating publishing programmes and delivering their titles via book stores. Booksellers' regulated commissions were small. They got even smaller because of large amounts of unsold copies, various and frequent misunderstandings over consignments and accounts, or plain accidents that destroyed shipments and warehouses. Also, the cultural aim of creating large assortments and the tendency toward short selling periods demanded professional entrepreneurship, which many small-town booksellers, however, lacked. In the midst of troublesome business efforts, co-operation and the mutual concern of the book market's entrepreneurs were the key elements of the trade, although at the local level booksellers would compete, sometimes even ferociously. The difficult circumstances (the new censorship decree of 1850, the Crimean War) and the lack of entrepreneurship, experience, and customers meant that half of the book stores opened in 1845–1860 were shut in less than five years. In 1858 the few leading publishers established the Finnish Book Publishers Association. Its first task was to create new business rules and manners for the book trade. The association's activities began to professionalise the whole network, but at the same time the earlier independence of regional publishing and selling enterprises diminished greatly. The consolidation of a modern and open book store network in Finland is a history of slow and complex development without clear signs of a beginning or an end. The ideal book store model was rarely accomplished in all its features. Nevertheless, book stores became the norm of the book trade. They managed to offer larger selections, reach larger clienteles, and maintain constant activity better than any other book distribution model. In essence, the book stores' methods have not changed up to the present day.

Relevance: 20.00%

Publisher:

Abstract:

Sepsis is associated with a systemic inflammatory response. It is characterised by an early proinflammatory response followed by a state of immunosuppression. In order to improve the outcome of patients with infection and sepsis, novel therapies that influence the systemic inflammatory response are being developed and utilised. Thus, an accurate and early diagnosis of infection and evaluation of the immune state are crucial. In this thesis, various markers of systemic inflammation were studied with respect to enhancing the diagnostics of infection and predicting outcome in patients with suspected community-acquired infection. A total of 1092 acutely ill patients admitted to a university hospital medical emergency department were evaluated, and 531 patients with a suspicion of community-acquired infection were included in the analysis. Markers of systemic inflammation were determined from a blood sample obtained simultaneously with a blood culture sample on admission to hospital. Levels of phagocyte CD11b/CD18 and CD14 expression were measured by whole blood flow cytometry. Concentrations of soluble CD14, interleukin (IL)-8, and soluble IL-2 receptor α (sIL-2Rα) were determined by ELISA, those of sIL-2R, IL-6, and IL-8 by a chemiluminescent immunoassay, that of procalcitonin (PCT) by an immunoluminometric assay, and that of C-reactive protein (CRP) by an immunoturbidimetric assay. Clinical data were collected retrospectively from the medical records. No marker of systemic inflammation (neither CRP, PCT, IL-6, IL-8, nor sIL-2R) predicted bacteraemia better than did the clinical signs of infection, i.e., the presence of an infectious focus, fever, or both. IL-6 and PCT had the highest positive likelihood ratios for identifying patients with hidden community-acquired infection. However, the use of a single marker failed to detect all patients with infection. A combination of markers including a fast-responding reactant (CD11b expression), a later-peaking reactant (CRP), and a reactant originating from inflamed tissues (IL-8) detected all patients with infection. The majority of patients (86.5%) with possible but not verified infection showed levels exceeding at least one cut-off limit of the combination, supporting the view that infection was the cause of their acute illness. The 28-day mortality of patients with community-acquired infection was low (3.4%). On admission to hospital, low expression of the cell-associated lipopolysaccharide receptor CD14 (mCD14) was predictive of 28-day mortality. In patients with severe forms of community-acquired infection, namely pneumonia and sepsis, a high level of soluble CD14 alone did not predict mortality, but a high sCD14 level measured simultaneously with low mCD14 raised the possibility of a poor prognosis. In conclusion, to further enhance the diagnostics of hidden community-acquired infection, a combination of inflammatory markers is useful; 28-day mortality is associated with low levels of mCD14 expression at an early phase of the disease.
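As an illustration of the "at least one cut-off exceeded" combination rule described above, the sketch below encodes the decision logic; the marker names mirror the abstract, but the cut-off values and units are hypothetical placeholders, not those used in the study.

```python
# Hypothetical sketch of the marker-combination rule: a patient is flagged when at
# least one marker exceeds its cut-off. Cut-off values below are invented examples.

CUTOFFS = {
    "CD11b_expression": 150.0,  # fast-responding reactant (arbitrary units)
    "CRP": 50.0,                # later-peaking reactant (mg/l, placeholder)
    "IL8": 60.0,                # reactant from inflamed tissues (pg/ml, placeholder)
}

def combination_positive(measurements: dict) -> bool:
    """Return True if any measured marker exceeds its cut-off limit."""
    return any(
        measurements.get(marker, 0.0) > limit for marker, limit in CUTOFFS.items()
    )

# Example: only CD11b exceeds its cut-off, so the combination is positive.
print(combination_positive({"CD11b_expression": 180.0, "CRP": 12.0, "IL8": 20.0}))
```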

Relevance: 20.00%

Publisher:

Abstract:

Standards have been put in place to regulate microbial and preservative contents and to assure that foods are safe for the consumer. In the case of a food-related disease outbreak, it is crucial to be able to detect and identify the cause of the disease quickly and accurately. In addition, for everyday control of the microbial and preservative contents of food, the detection methods must be easy to perform on numerous food samples. In the present study, quicker alternative methods were studied for the identification of bacteria by DNA fingerprinting. A flow cytometry method was developed as an alternative to pulsed-field gel electrophoresis, the "golden method". DNA fragment sizing by an ultrasensitive flow cytometer was able to discriminate species and strains in a manner reproducible with and comparable to pulsed-field gel electrophoresis. This new method was hundreds of times faster and 200,000 times more sensitive. Additionally, another DNA fingerprinting identification method was developed based on single-enzyme amplified fragment length polymorphism (SE-AFLP). This method allowed the differentiation of genera, species, and strains of pathogenic bacteria of Bacilli, Staphylococci, Yersinia, and Escherichia coli. The fingerprinting patterns obtained by SE-AFLP were simpler and easier to analyze than those obtained by traditional amplified fragment length polymorphism with double-enzyme digestion. Nisin (E234) is added as a preservative to different types of foods, especially dairy products, around the world. Various detection methods exist for nisin, but they lack sensitivity, speed, or specificity. In the present study, a sensitive nisin-induced green fluorescent protein (GFPuv) bioassay was developed using the Lactococcus lactis two-component signal system NisRK and the nisin-inducible nisA promoter. The bioassay was extremely sensitive, with a detection limit of 10 pg/ml in culture supernatant. In addition, it was suitable for quantification from various food matrices, such as milk, salad dressings, processed cheese, liquid eggs, and canned tomatoes. Wine has good antimicrobial properties due to its alcohol concentration, low pH, and organic content, and it is therefore often assumed to be microbially safe to consume. Another aim of this thesis was to study the microbiota of wines returned by customers complaining of food-poisoning symptoms. By partial 16S rRNA gene sequence analysis, ribotyping, and a boar spermatozoa motility assay, one of the wines was found to contain Bacillus simplex BAC91, which produced a heat-stable substance toxic to the mitochondria of sperm cells. The antibacterial activity of wine was tested on the vegetative cells and spores of B. simplex BAC91, B. cereus type strain ATCC 14579, and cereulide-producing B. cereus F4810/72. Although the vegetative cells and spores of B. simplex BAC91 were sensitive to the antimicrobial effects of wine, the spores of B. cereus strains ATCC 14579 and F4810/72 stayed viable for at least 4 months. According to these results, Bacillus spp., and more specifically their spores, can pose a risk to the wine consumer.

Relevance: 20.00%

Publisher:

Abstract:

In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images. Classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability. Earlier frameworks are lacking in this regard. The overall contribution is twofold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with. This allows the separation of the essential from the conventional. To determine whether the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented. For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs. We also ask whether accuracy-versus-effort trade-offs can be controlled after training. As another example, regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner. We then ask whether problem-specific organization is necessary.
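To make the tree-like, per-input notion of efficiency concrete, here is a minimal sketch (not the thesis framework itself) in which a cheap classifier handles confident cases and delegates the rest to a costlier, more accurate one; all names, thresholds, and costs are illustrative.

```python
# Illustrative delegation chain: each node returns (label, confidence); low-confidence
# inputs are passed to a more accurate but computationally costlier node.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Node:
    predict: Callable[[List[float]], Tuple[int, float]]  # returns (label, confidence)
    cost: float                                          # relative computational effort
    fallback: Optional["Node"] = None                    # where uncertain inputs go

def classify(node: Node, x: List[float], threshold: float = 0.8) -> Tuple[int, float]:
    """Classify x, stopping as soon as confidence reaches the threshold.

    Returns the predicted label and the total effort spent; the accuracy/effort
    trade-off is controlled per input via `threshold`.
    """
    label, confidence = node.predict(x)
    effort = node.cost
    while confidence < threshold and node.fallback is not None:
        node = node.fallback
        label, confidence = node.predict(x)
        effort += node.cost
    return label, effort

# Example: a cheap linear rule delegating to a (stand-in) expensive classifier.
cheap = Node(predict=lambda x: (int(sum(x) > 0), 0.6), cost=1.0,
             fallback=Node(predict=lambda x: (1, 0.95), cost=10.0))
print(classify(cheap, [0.2, -0.1]))  # cheap node is unsure, so total effort = 11.0
```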

Relevance: 20.00%

Publisher:

Abstract:

Measurement of fractional exhaled nitric oxide (FENO) has proven useful in the assessment of patients with respiratory symptoms, especially in predicting steroid response. The objective of these studies was to clarify issues relevant to the clinical use of FENO. The influence of allergic sensitization per se on FENO in healthy asymptomatic subjects was studied, and the association between airway inflammation and bronchial hyperresponsiveness (BHR) in steroid-naive subjects with symptoms suggesting asthma was examined, as well as the possible difference in this association between atopic and nonatopic subjects. The influence of smoking on FENO was compared between atopic and nonatopic steroid-naive asthmatics and healthy subjects. The short-term repeatability of FENO in COPD patients was examined in order to assess whether the degree of chronic obstruction influences the repeatability. For these purposes, we studied a random sample of 248 citizens of Helsinki, 227 army conscripts with current symptoms suggesting asthma, 19 COPD patients, and 39 healthy subjects. FENO measurement, spirometry and a bronchodilatation test, a structured interview, skin prick tests, and histamine and exercise challenges were performed. Among healthy subjects with no signs of airway disease, median FENO was similar in skin prick test-positive and -negative subjects, and the upper normal limit of FENO was 30 ppb. In atopic and nonatopic subjects with symptoms suggesting asthma, FENO was associated with the severity of exercise- or histamine-induced BHR only in atopic patients. FENO in smokers with steroid-naive asthma was significantly higher than in healthy smokers and nonsmokers. Among atopic asthmatics, FENO was significantly lower in smokers than in nonsmokers, whereas no difference appeared among nonatopic asthmatics. The 24-h repeatability of FENO was as good in COPD patients as in healthy subjects. These findings indicate that allergic sensitization per se does not influence FENO, supporting the view that elevated FENO indicates NO-producing airway inflammation and that the same reference range can be applied to both skin prick test-positive and -negative subjects. The significant correlation between FENO and the degree of BHR only in atopic steroid-naive subjects with current asthmatic symptoms supports the view that the pathogenesis of BHR in atopic asthma strongly involves NO-producing airway inflammation, whereas in the development of BHR in nonatopic asthma other mechanisms may dominate. Attenuation of FENO in atopic but not in nonatopic smokers with steroid-naive asthma may result from differences in the mechanisms of FENO formation, as well as in the sensitivity of these mechanisms to smoking, between atopic and nonatopic asthma. The results suggest, however, that in young adult smokers FENO measurement may prove useful in the assessment of airway inflammation. The short-term repeatability of FENO was equally good in COPD patients with moderate to very severe disease and in healthy subjects.

Relevance: 20.00%

Publisher:

Abstract:

The purpose of the present study was to investigate the effects of low-intensity ultrasound on bioabsorbable self-reinforced poly-L-lactide (SR-PLLA) screws and on fracture healing after SR-PLLA device fixation in experimental and clinical cancellous bone fractures. In the first experimental study, the mechanical strength of the SR-PLLA screws was assessed after 12 weeks of daily 20-minute ultrasound exposure in vitro. In the second experimental study, 32 male Wistar rats with an experimental distal femur osteotomy fixed with an SR-PLLA rod were exposed to daily low-intensity ultrasound treatment for 21 days, and the effects on the healing bone were assessed. The clinical studies consist of three prospective, randomized, placebo-controlled series of dislocated lateral malleolar fractures fixed with one SR-PLLA screw. The total number of patients in these series was 52. Half of the patients were randomly provided with a sham ultrasound device. The patients underwent ultrasound therapy for 20 minutes daily for six weeks. Radiological bone healing was assessed both by radiographs at two, six, nine, and 12 weeks and by multidetector computed tomography (MDCT) scans at two weeks, nine weeks, and 18 months. Bone mineral density was assessed by dual-energy X-ray absorptiometry (DXA). The clinical outcome was assessed by both Olerud-Molander scoring and clinical examination of the ankle. Low-intensity ultrasound had no effect on the mechanical properties and degradation behaviour of the SR-PLLA screws in vitro. There were no obvious signs of low-intensity ultrasound-induced enhancement of bone healing in the SR-PLLA-rod-fixed metaphyseal distal femur osteotomy in rats. The biocompatibility of low-intensity ultrasound treatment and SR-PLLA was found to be good. In the clinical series, low-intensity ultrasound was observed to have no obvious effect on the bone mineral density of the fractured lateral malleolus. There were no obvious differences in the radiological bone healing times of the SR-PLLA-screw-fixed lateral malleolar fractures after low-intensity ultrasound treatment. Low-intensity ultrasound did not have any effect on radiological bone morphology, bone mineral density, or clinical outcome 18 months after the injury. There were no obvious findings in the present study to support the hypothesis that low-intensity pulsed ultrasound enhances bone healing in SR-PLLA-rod-fixed experimental metaphyseal distal femur osteotomies in rats or in clinical SR-PLLA-screw-fixed lateral malleolar fractures. It is important to limit the conclusions of the present set of studies to lateral malleolar fractures fixed with an SR-PLLA screw.

Relevance: 20.00%

Publisher:

Abstract:

Spirometry is the most widely used lung function test in the world. It is fundamental in the diagnostic and functional evaluation of various pulmonary diseases. In the studies described in this thesis, the spirometric assessment of the reversibility of bronchial obstruction, its determinants, and its variation are described in a general population sample from Helsinki, Finland. This study is part of the FinEsS study, a collaborative study of the clinical epidemiology of respiratory health between Finland (Fin), Estonia (Es), and Sweden (S). Asthma and chronic obstructive pulmonary disease (COPD) constitute the two major obstructive airways diseases. The prevalence of asthma has increased, with around 6% of the population in Helsinki reporting physician-diagnosed asthma. The main cause of COPD is smoking, with changes in the population's smoking habits affecting its prevalence with a delay. Whereas airway obstruction in asthma is by definition reversible, COPD is characterized by fixed obstruction. Cough and sputum production, the first symptoms of COPD, are often mistaken for smoker's cough and not recognized as the first signs of a chronic illness. Therefore COPD is widely underdiagnosed. More extensive use of spirometry in primary care is advocated to focus smoking cessation interventions on populations at risk. The use of forced expiratory volume in six seconds (FEV6) instead of forced vital capacity (FVC) has been suggested to enable office spirometry to be used in earlier detection of airflow limitation. Although spirometry is a widely accepted standard method for assessing lung function, its methodology and interpretation are constantly developing. In 2005, the ATS/ERS Task Force issued a joint statement which endorsed the 12% and 200 ml thresholds for significant change in forced expiratory volume in one second (FEV1) or FVC during bronchodilation testing, but included the notion that in cases where only FVC improves it should be verified that this is not caused by a longer exhalation time in post-bronchodilator spirometry. This elicited new interest in the assessment of forced expiratory time (FET), a spirometric variable not usually reported or used in assessment. In this population sample, we examined FET and found it to be 10.7 (SD 4.3) s on average and to increase with ageing and with airflow limitation in spirometry. The intrasession repeatability of FET was the poorest of the spirometric variables assessed. Based on the intrasession repeatability, a limit of 3 s was suggested for significant change in FET during bronchodilation testing. FEV6 was found to perform as well as FVC in the population and in a subgroup of subjects with airway obstruction. In the bronchodilation test, decreases were frequently observed in FEV1 and particularly in FVC. The limit of significant increase based on the 95th percentile of the population sample was 9% for FEV1 and 6% for FEV6 and FVC; these are slightly lower than the current limits for single bronchodilation tests (ATS/ERS guidelines). FEV6 proved to be a valid alternative to FVC also in the bronchodilation test and would remove the need to control the duration of exhalation during the spirometric bronchodilation test.
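As a concrete reading of the ATS/ERS thresholds cited above, the sketch below checks whether a bronchodilator response is significant; it is only an illustration of the 12% and 200 ml rule, and the function name and example values are invented.

```python
# Illustration of the ATS/ERS 2005 criterion: a significant bronchodilator response
# requires an improvement of at least 12% AND at least 200 ml in FEV1 (or FVC).

def significant_response(pre_litres: float, post_litres: float) -> bool:
    """Return True if the post-bronchodilator value meets the 12%/200 ml criterion."""
    change_litres = post_litres - pre_litres
    change_percent = 100.0 * change_litres / pre_litres
    return change_percent >= 12.0 and change_litres >= 0.200

# Example: FEV1 rises from 2.50 l to 2.85 l (+0.35 l, +14%), a significant response.
print(significant_response(2.50, 2.85))  # True
```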

Relevance: 20.00%

Publisher:

Abstract:

Speech has both auditory and visual components (heard speech sounds and seen articulatory gestures). In all perception, selective attention facilitates efficient information processing and enables concentration on high-priority stimuli. Auditory and visual sensory systems interact at multiple processing levels during speech perception, and the classical motor speech regions also seem to participate in speech perception. Auditory, visual, and motor-articulatory processes may thus work in parallel during speech perception, their use possibly depending on the information available and the individual characteristics of the observer. Because of their subtle speech perception difficulties, possibly stemming from disturbances at elemental levels of sensory processing, dyslexic readers may rely more on motor-articulatory speech perception strategies than do fluent readers. This thesis aimed to investigate the neural mechanisms of speech perception and selective attention in fluent and dyslexic readers. We conducted four functional magnetic resonance imaging experiments, during which subjects perceived articulatory gestures, speech sounds, and other auditory and visual stimuli. Gradient echo-planar images depicting blood oxygenation level-dependent contrast were acquired during stimulus presentation to indirectly measure brain hemodynamic activation. Lip-reading activated the primary auditory cortex, and selective attention to visual speech gestures enhanced activity within the left secondary auditory cortex. Attention to non-speech sounds enhanced auditory cortex activity bilaterally; this effect was modulated by sound presentation rate. A comparison of fluent and dyslexic readers' brain hemodynamic activity during audiovisual speech perception revealed stronger activation of predominantly motor speech areas in dyslexic readers in a contrast test that allowed exploration of the processing of phonetic features extracted from auditory and visual speech. The results show that visual speech perception modulates hemodynamic activity within auditory cortex areas once considered unimodal, and they suggest that the left secondary auditory cortex specifically participates in extracting the linguistic content of seen articulatory gestures. They are strong evidence for the importance of attention as a modulator of auditory cortex function during both sound processing and visual speech perception, and they point to the nature of attention as an interactive process influenced by stimulus-driven effects. Further, they suggest a heightened reliance on motor-articulatory and visual speech perception strategies among dyslexic readers, possibly compensating for their auditory speech perception difficulties.

Relevance: 20.00%

Publisher:

Abstract:

We combine results from searches by the CDF and D0 collaborations for a standard model Higgs boson (H) in the process gg -> H -> W+W- in p-pbar collisions at the Fermilab Tevatron Collider at sqrt(s) = 1.96 TeV. With 4.8 fb-1 of integrated luminosity analyzed at CDF and 5.4 fb-1 at D0, the 95% Confidence Level upper limit on sigma(gg -> H) x B(H -> W+W-) is 1.75 pb at m_H = 120 GeV, 0.38 pb at m_H = 165 GeV, and 0.83 pb at m_H = 200 GeV. Assuming the presence of a fourth sequential generation of fermions with large masses, we exclude at the 95% Confidence Level a standard-model-like Higgs boson with a mass between 131 and 204 GeV.

Relevance: 20.00%

Publisher:

Abstract:

Market microstructure is “the study of the trading mechanisms used for financial securities” (Hasbrouck 2007). It seeks to understand the sources of value and reasons for trade, in a setting with different types of traders and different private and public information sets. The actual mechanisms of trade are a continually changing object of study. These include continuous markets, auctions, limit order books, dealer markets, or combinations of these operating as a hybrid market. Microstructure also has to allow for the possibility of multiple prices. At any given time an investor may be faced with a multitude of different prices, depending on whether he or she is buying or selling, the quantity he or she wishes to trade, and the required speed for the trade. The price may also depend on the relationship that the trader has with potential counterparties. In this research, I touch upon all of the above issues. I do this by studying three specific areas, all of which have both practical and policy implications. First, I study the role of information in trading and pricing securities in markets with a heterogeneous population of traders, some of whom are informed and some not, and who trade for different private or public reasons. Second, I study the price discovery of stocks in a setting where they are simultaneously traded in more than one market. Third, I make a contribution to the ongoing discussion about market design, i.e. the question of which trading systems and ways of organizing trading are most efficient. A common characteristic throughout my thesis is the use of high-frequency datasets, i.e. tick data. These datasets include all trades and quotes in a given security, rather than just the daily closing prices, as in the traditional asset pricing literature. This thesis consists of four separate essays. In the first essay I study price discovery for European companies cross-listed in the United States. I also study explanatory variables for differences in price discovery. In my second essay I contribute to earlier research on two issues of broad interest in market microstructure: market transparency and informed trading. I examine the effects of a change to an anonymous market at the OMX Helsinki Stock Exchange. I broaden my focus slightly in the third essay to include releases of macroeconomic data in the United States, and I analyze the effect of these releases on European cross-listed stocks. The fourth and last essay examines the use of standard methodologies of price discovery analysis in a novel way. Specifically, I study price discovery within one market, between local and foreign traders.

Relevance: 20.00%

Publisher:

Abstract:

Colour is an essential aspect of our daily life, and yet it is a neglected issue within marketing research. The main reason for studying colours is to understand their impact on consumer behaviour, and thus colours should be studied in relation to branding, advertising, packages, interiors, and employees' clothing, for example. This was an exploratory study of the impact of colours on packages. The focus was on low-involvement purchasing, where the consumer puts limited effort into the decision-making. The basis was a scenario in which the consumer faces an unpredictable problem needing immediate action. The consumer may be in a hurry, which indicates time pressure. The consumer may lack brand preferences, or the preferred brand may be out of stock. The issue is that the choice is to be made at the point of purchase. Further, the purchasing involves product classes where the core products behind the brands are indistinguishable from each other. Three research questions were posed. Two questions were answered by conjoint analysis, i.e. whether colours have an impact on decision-making and whether a possible impact is related to the product class. Sixteen hypothetical packages were designed in two healthcare product classes, i.e. painkillers and medicine against sore throats. The last research question aimed at detecting how an analysis could be carried out in order to understand the impact of colours. This question was answered by conducting interviews that were analysed by applying the laddering method and a semiotic approach. The study found that colours do indeed have an impact on consumer behaviour, this impact being related to the context, such as the product class. The role of colours on packages was found to be threefold: attention, aesthetics, and communication. The study focused on colours as a means of communication, and it proposes that colours convey product, brand, and product class meanings, these meanings having an impact on consumers' decision-making at the point of purchase. In addition, the study demonstrates how design elements such as colours can be understood by regarding them as non-verbal signs. The study also presents an empirical design, involving quantitative and qualitative techniques, that can be used to gain an in-depth understanding of the impact of design elements on consumer behaviour. Hannele Kauppinen is associated with CERS, the Centre for Relationship Marketing and Service Management of the Swedish School of Economics and Business Administration.
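For readers unfamiliar with conjoint analysis, the sketch below shows one simple way such a design could reveal whether package colour carries weight in preferences: part-worth utilities are estimated by regressing ratings on dummy-coded colour attributes. The profiles, ratings, and colours are invented for illustration and are not the study's data.

```python
# Toy conjoint-style estimation: regress preference ratings of package profiles on
# dummy-coded colour attributes to obtain part-worth utilities (white = baseline).
# All numbers are invented for illustration.
import numpy as np

# Columns: intercept, red, blue, green; one row per hypothetical package profile.
X = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 0],
], dtype=float)
ratings = np.array([6.0, 4.5, 5.0, 3.5])

coefficients, *_ = np.linalg.lstsq(X, ratings, rcond=None)
intercept, red, blue, green = coefficients
print(f"part-worths relative to white: red={red:.2f}, blue={blue:.2f}, green={green:.2f}")
```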

Relevance: 20.00%

Publisher:

Abstract:

Sea level rise is among the most worrying consequences of climate change, and the biggest uncertainty in sea level predictions lies in the future behaviour of the ice sheets of Greenland and Antarctica. In this work, a literature review is made concerning the future of the Greenland ice sheet and the effect of its melting on Baltic Sea level. The relation between sea level and ice sheets is also considered more generally from a theoretical and historical point of view. Lately, surprisingly rapid changes in the amount of ice discharging into the sea have been observed along the coastal areas of the ice sheets, and the mass deficit of the Greenland and West Antarctic ice sheets, which are considered vulnerable to warming, has been increasing since the 1990s. The changes are probably related to atmospheric or oceanic temperature variations which affect the flow speed of ice either via meltwater penetrating to the bottom of the ice sheet or via changes in the flow resistance generated by the floating parts of an ice stream. These phenomena are assumed to increase the mass deficit of the ice sheets in the warming climate; however, there is no comprehensive theory to explain and model them. Thus, it is not yet possible to make reliable predictions of the ice sheet contribution to sea level rise. On the grounds of the historical evidence it appears that sea level can rise rather rapidly, 1–2 metres per century, even during warm climate periods. Sea level rise projections of similar magnitude have been made with so-called semiempirical methods that are based on modelling the link between sea level and global mean temperature. Such a rapid rise would require considerable acceleration of the ice sheet flow. A stronger rise appears rather unlikely, among other things because the mountainous coastline restricts ice discharge from Greenland. The upper limit of sea level rise from Greenland alone has been estimated at half a metre by the end of this century. Due to changes in the Earth's gravity field, the sea level rise caused by melting ice is not spatially uniform. Near the melting ice sheet the sea level rise is considerably smaller than the global average, whereas farther away it is slightly greater than the average. Because of this phenomenon, the effect of the Greenland ice sheet on Baltic Sea level will probably be rather small during this century, 15 cm at most. Melting of the Antarctic ice sheet is clearly more dangerous for the Baltic Sea, but its contribution is also very uncertain. It is likely that sea level predictions will become more accurate in the near future as ice sheet models develop.
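As an illustration of the semiempirical approach mentioned above, such methods typically fit a simple Rahmstorf-type relation to historical records; this is given only as an example of the class of models, not as the specific formulation used in the reviewed studies:

```latex
\frac{dH}{dt} = a\,\bigl(T(t) - T_0\bigr)
```

where H is global mean sea level, T(t) the global mean temperature, T_0 a reference temperature at which sea level is in equilibrium, and a a sensitivity coefficient estimated from past data; integrating the fitted relation over a temperature scenario then yields a sea level projection.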